Jul 2 07:47:28.784389 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 07:47:28.784406 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 07:47:28.784416 kernel: BIOS-provided physical RAM map:
Jul 2 07:47:28.784422 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 2 07:47:28.784427 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 2 07:47:28.784432 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 2 07:47:28.784439 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 2 07:47:28.784445 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 2 07:47:28.784450 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jul 2 07:47:28.784457 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jul 2 07:47:28.784462 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jul 2 07:47:28.784468 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Jul 2 07:47:28.784473 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jul 2 07:47:28.784479 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 2 07:47:28.784486 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jul 2 07:47:28.784493 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jul 2 07:47:28.784499 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 2 07:47:28.784504 kernel: NX (Execute Disable) protection: active
Jul 2 07:47:28.784510 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Jul 2 07:47:28.784516 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Jul 2 07:47:28.784522 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Jul 2 07:47:28.784528 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Jul 2 07:47:28.784533 kernel: extended physical RAM map:
Jul 2 07:47:28.784539 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 2 07:47:28.784545 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 2 07:47:28.784552 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 2 07:47:28.784558 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 2 07:47:28.784564 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 2 07:47:28.784570 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Jul 2 07:47:28.784576 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jul 2 07:47:28.784582 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1aa017] usable
Jul 2 07:47:28.784588 kernel: reserve setup_data: [mem 0x000000009b1aa018-0x000000009b1e6e57] usable
Jul 2 07:47:28.784593 kernel: reserve setup_data: [mem 0x000000009b1e6e58-0x000000009b3f7017] usable
Jul 2 07:47:28.784599 kernel: reserve setup_data: [mem 0x000000009b3f7018-0x000000009b400c57] usable
Jul 2 07:47:28.784605 kernel: reserve setup_data: [mem 0x000000009b400c58-0x000000009c8eefff] usable
Jul 2 07:47:28.784611 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Jul 2 07:47:28.784618 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jul 2 07:47:28.784624 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 2 07:47:28.784630 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jul 2 07:47:28.784636 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jul 2 07:47:28.784645 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 2 07:47:28.784651 kernel: efi: EFI v2.70 by EDK II
Jul 2 07:47:28.784657 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018
Jul 2 07:47:28.784665 kernel: random: crng init done
Jul 2 07:47:28.784671 kernel: SMBIOS 2.8 present.
Jul 2 07:47:28.784677 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Jul 2 07:47:28.784684 kernel: Hypervisor detected: KVM
Jul 2 07:47:28.784690 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 07:47:28.784696 kernel: kvm-clock: cpu 0, msr 2c192001, primary cpu clock
Jul 2 07:47:28.784717 kernel: kvm-clock: using sched offset of 4406773897 cycles
Jul 2 07:47:28.784724 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 07:47:28.784739 kernel: tsc: Detected 2794.748 MHz processor
Jul 2 07:47:28.784749 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 07:47:28.784756 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 07:47:28.784762 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jul 2 07:47:28.784769 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 07:47:28.784776 kernel: Using GB pages for direct mapping
Jul 2 07:47:28.784782 kernel: Secure boot disabled
Jul 2 07:47:28.784789 kernel: ACPI: Early table checksum verification disabled
Jul 2 07:47:28.784795 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 2 07:47:28.784802 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Jul 2 07:47:28.784810 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:47:28.784816 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:47:28.784823 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 2 07:47:28.784829 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:47:28.784836 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:47:28.784842 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 07:47:28.784849 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 2 07:47:28.784855 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Jul 2 07:47:28.784862 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Jul 2 07:47:28.784870 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 2 07:47:28.784876 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Jul 2 07:47:28.784883 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Jul 2 07:47:28.784889 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Jul 2 07:47:28.784895 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Jul 2 07:47:28.784902 kernel: No NUMA configuration found
Jul 2 07:47:28.784908 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jul 2 07:47:28.784915 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jul 2 07:47:28.784922 kernel: Zone ranges:
Jul 2 07:47:28.784929 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 07:47:28.784936 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jul 2 07:47:28.784942 kernel: Normal empty
Jul 2 07:47:28.784949 kernel: Movable zone start for each node
Jul 2 07:47:28.784955 kernel: Early memory node ranges
Jul 2 07:47:28.784961 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 2 07:47:28.784968 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 2 07:47:28.784974 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 2 07:47:28.784981 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jul 2 07:47:28.784988 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jul 2 07:47:28.784995 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jul 2 07:47:28.785001 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jul 2 07:47:28.785008 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 07:47:28.785014 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 2 07:47:28.785021 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 2 07:47:28.785027 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 07:47:28.785034 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jul 2 07:47:28.785040 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 2 07:47:28.785048 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jul 2 07:47:28.785054 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 2 07:47:28.785061 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 07:47:28.785067 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 07:47:28.785074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 07:47:28.785080 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 07:47:28.785087 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 07:47:28.785093 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 07:47:28.785100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 07:47:28.785107 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 07:47:28.785114 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 07:47:28.785120 kernel: TSC deadline timer available
Jul 2 07:47:28.785126 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 2 07:47:28.785133 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 2 07:47:28.785139 kernel: kvm-guest: setup PV sched yield
Jul 2 07:47:28.785146 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Jul 2 07:47:28.785152 kernel: Booting paravirtualized kernel on KVM
Jul 2 07:47:28.785159 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 07:47:28.785165 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Jul 2 07:47:28.785173 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Jul 2 07:47:28.785180 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Jul 2 07:47:28.785190 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 2 07:47:28.785198 kernel: kvm-guest: setup async PF for cpu 0
Jul 2 07:47:28.785204 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0
Jul 2 07:47:28.785211 kernel: kvm-guest: PV spinlocks enabled
Jul 2 07:47:28.785218 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 07:47:28.785225 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jul 2 07:47:28.785232 kernel: Policy zone: DMA32
Jul 2 07:47:28.785239 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 07:47:28.785247 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 07:47:28.785255 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 07:47:28.785262 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 07:47:28.785269 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 07:47:28.785276 kernel: Memory: 2398372K/2567000K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 168368K reserved, 0K cma-reserved)
Jul 2 07:47:28.785284 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 07:47:28.785291 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 07:47:28.785298 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 07:47:28.785304 kernel: rcu: Hierarchical RCU implementation.
Jul 2 07:47:28.785312 kernel: rcu: RCU event tracing is enabled.
Jul 2 07:47:28.785319 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 07:47:28.785326 kernel: Rude variant of Tasks RCU enabled.
Jul 2 07:47:28.785332 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 07:47:28.785339 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 07:47:28.785347 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 07:47:28.785354 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 2 07:47:28.785361 kernel: Console: colour dummy device 80x25
Jul 2 07:47:28.785368 kernel: printk: console [ttyS0] enabled
Jul 2 07:47:28.785375 kernel: ACPI: Core revision 20210730
Jul 2 07:47:28.785382 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 2 07:47:28.785389 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 07:47:28.785395 kernel: x2apic enabled
Jul 2 07:47:28.785402 kernel: Switched APIC routing to physical x2apic.
Jul 2 07:47:28.785409 kernel: kvm-guest: setup PV IPIs
Jul 2 07:47:28.785417 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 07:47:28.785424 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 2 07:47:28.785431 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 2 07:47:28.785437 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 2 07:47:28.785444 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 2 07:47:28.785451 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 2 07:47:28.785458 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 07:47:28.785465 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 07:47:28.785472 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 07:47:28.785480 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 07:47:28.785486 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 2 07:47:28.785493 kernel: RETBleed: Mitigation: untrained return thunk
Jul 2 07:47:28.785500 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 07:47:28.785507 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 2 07:47:28.785514 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 07:47:28.785521 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 07:47:28.785528 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 07:47:28.785535 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 07:47:28.785546 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 2 07:47:28.785555 kernel: Freeing SMP alternatives memory: 32K
Jul 2 07:47:28.785564 kernel: pid_max: default: 32768 minimum: 301
Jul 2 07:47:28.785573 kernel: LSM: Security Framework initializing
Jul 2 07:47:28.785581 kernel: SELinux: Initializing.
Jul 2 07:47:28.785589 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 07:47:28.785596 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 07:47:28.785603 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 2 07:47:28.785614 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 2 07:47:28.785623 kernel: ... version: 0
Jul 2 07:47:28.785632 kernel: ... bit width: 48
Jul 2 07:47:28.785641 kernel: ... generic registers: 6
Jul 2 07:47:28.785650 kernel: ... value mask: 0000ffffffffffff
Jul 2 07:47:28.785659 kernel: ... max period: 00007fffffffffff
Jul 2 07:47:28.785668 kernel: ... fixed-purpose events: 0
Jul 2 07:47:28.785677 kernel: ... event mask: 000000000000003f
Jul 2 07:47:28.785686 kernel: signal: max sigframe size: 1776
Jul 2 07:47:28.785695 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 07:47:28.785715 kernel: smp: Bringing up secondary CPUs ...
Jul 2 07:47:28.785722 kernel: x86: Booting SMP configuration:
Jul 2 07:47:28.785736 kernel: .... node #0, CPUs: #1
Jul 2 07:47:28.785745 kernel: kvm-clock: cpu 1, msr 2c192041, secondary cpu clock
Jul 2 07:47:28.785753 kernel: kvm-guest: setup async PF for cpu 1
Jul 2 07:47:28.785760 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0
Jul 2 07:47:28.785767 kernel: #2
Jul 2 07:47:28.785774 kernel: kvm-clock: cpu 2, msr 2c192081, secondary cpu clock
Jul 2 07:47:28.785781 kernel: kvm-guest: setup async PF for cpu 2
Jul 2 07:47:28.785789 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0
Jul 2 07:47:28.785796 kernel: #3
Jul 2 07:47:28.785802 kernel: kvm-clock: cpu 3, msr 2c1920c1, secondary cpu clock
Jul 2 07:47:28.785809 kernel: kvm-guest: setup async PF for cpu 3
Jul 2 07:47:28.785816 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0
Jul 2 07:47:28.785823 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 07:47:28.785874 kernel: smpboot: Max logical packages: 1
Jul 2 07:47:28.785881 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 2 07:47:28.785888 kernel: devtmpfs: initialized
Jul 2 07:47:28.785896 kernel: x86/mm: Memory block size: 128MB
Jul 2 07:47:28.785903 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 2 07:47:28.785910 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 2 07:47:28.785917 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jul 2 07:47:28.785924 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 2 07:47:28.785931 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 2 07:47:28.785938 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 07:47:28.785945 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 07:47:28.785952 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 07:47:28.785960 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 07:47:28.785967 kernel: audit: initializing netlink subsys (disabled)
Jul 2 07:47:28.785974 kernel: audit: type=2000 audit(1719906449.476:1): state=initialized audit_enabled=0 res=1
Jul 2 07:47:28.785981 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 07:47:28.785988 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 07:47:28.785994 kernel: cpuidle: using governor menu
Jul 2 07:47:28.786001 kernel: ACPI: bus type PCI registered
Jul 2 07:47:28.786008 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 07:47:28.786015 kernel: dca service started, version 1.12.1
Jul 2 07:47:28.786023 kernel: PCI: Using configuration type 1 for base access
Jul 2 07:47:28.786030 kernel: PCI: Using configuration type 1 for extended access
Jul 2 07:47:28.786037 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 07:47:28.786044 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 07:47:28.786051 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 07:47:28.786057 kernel: ACPI: Added _OSI(Module Device)
Jul 2 07:47:28.786064 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 07:47:28.786071 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 07:47:28.786078 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 07:47:28.786086 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 07:47:28.786093 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 07:47:28.786100 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 07:47:28.786107 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 07:47:28.786113 kernel: ACPI: Interpreter enabled
Jul 2 07:47:28.786120 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 2 07:47:28.786127 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 07:47:28.786134 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 07:47:28.786141 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 07:47:28.786149 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 07:47:28.786263 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 07:47:28.786274 kernel: acpiphp: Slot [3] registered
Jul 2 07:47:28.786281 kernel: acpiphp: Slot [4] registered
Jul 2 07:47:28.786288 kernel: acpiphp: Slot [5] registered
Jul 2 07:47:28.786295 kernel: acpiphp: Slot [6] registered
Jul 2 07:47:28.786302 kernel: acpiphp: Slot [7] registered
Jul 2 07:47:28.786309 kernel: acpiphp: Slot [8] registered
Jul 2 07:47:28.786317 kernel: acpiphp: Slot [9] registered
Jul 2 07:47:28.786324 kernel: acpiphp: Slot [10] registered
Jul 2 07:47:28.786340 kernel: acpiphp: Slot [11] registered
Jul 2 07:47:28.786356 kernel: acpiphp: Slot [12] registered
Jul 2 07:47:28.786363 kernel: acpiphp: Slot [13] registered
Jul 2 07:47:28.786370 kernel: acpiphp: Slot [14] registered
Jul 2 07:47:28.786376 kernel: acpiphp: Slot [15] registered
Jul 2 07:47:28.786383 kernel: acpiphp: Slot [16] registered
Jul 2 07:47:28.786390 kernel: acpiphp: Slot [17] registered
Jul 2 07:47:28.786397 kernel: acpiphp: Slot [18] registered
Jul 2 07:47:28.786405 kernel: acpiphp: Slot [19] registered
Jul 2 07:47:28.786412 kernel: acpiphp: Slot [20] registered
Jul 2 07:47:28.786419 kernel: acpiphp: Slot [21] registered
Jul 2 07:47:28.786426 kernel: acpiphp: Slot [22] registered
Jul 2 07:47:28.786433 kernel: acpiphp: Slot [23] registered
Jul 2 07:47:28.786439 kernel: acpiphp: Slot [24] registered
Jul 2 07:47:28.786446 kernel: acpiphp: Slot [25] registered
Jul 2 07:47:28.786453 kernel: acpiphp: Slot [26] registered
Jul 2 07:47:28.786460 kernel: acpiphp: Slot [27] registered
Jul 2 07:47:28.786468 kernel: acpiphp: Slot [28] registered
Jul 2 07:47:28.786475 kernel: acpiphp: Slot [29] registered
Jul 2 07:47:28.786482 kernel: acpiphp: Slot [30] registered
Jul 2 07:47:28.786488 kernel: acpiphp: Slot [31] registered
Jul 2 07:47:28.786495 kernel: PCI host bridge to bus 0000:00
Jul 2 07:47:28.786578 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 07:47:28.786669 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 07:47:28.786769 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 07:47:28.786840 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Jul 2 07:47:28.786900 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Jul 2 07:47:28.786965 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 07:47:28.787045 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 07:47:28.787122 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 07:47:28.787198 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 07:47:28.787270 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Jul 2 07:47:28.787337 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 07:47:28.787405 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 07:47:28.787473 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 07:47:28.787539 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 07:47:28.787616 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 07:47:28.787686 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 2 07:47:28.787782 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jul 2 07:47:28.787856 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Jul 2 07:47:28.787925 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jul 2 07:47:28.787993 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Jul 2 07:47:28.788059 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 2 07:47:28.788125 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Jul 2 07:47:28.788191 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 07:47:28.788269 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 07:47:28.788338 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Jul 2 07:47:28.788410 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jul 2 07:47:28.788498 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jul 2 07:47:28.788600 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 07:47:28.788672 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 07:47:28.788774 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jul 2 07:47:28.788851 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jul 2 07:47:28.791836 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Jul 2 07:47:28.791913 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jul 2 07:47:28.791983 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Jul 2 07:47:28.792051 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jul 2 07:47:28.792118 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jul 2 07:47:28.792127 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 07:47:28.792137 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 07:47:28.792144 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 07:47:28.792151 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 07:47:28.792158 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 07:47:28.792164 kernel: iommu: Default domain type: Translated
Jul 2 07:47:28.792172 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 07:47:28.792239 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 07:47:28.792305 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 07:47:28.792373 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 07:47:28.792384 kernel: vgaarb: loaded
Jul 2 07:47:28.792391 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 07:47:28.792398 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 07:47:28.792405 kernel: PTP clock support registered
Jul 2 07:47:28.792412 kernel: Registered efivars operations
Jul 2 07:47:28.792418 kernel: PCI: Using ACPI for IRQ routing
Jul 2 07:47:28.792425 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 07:47:28.792433 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 2 07:47:28.792439 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jul 2 07:47:28.792447 kernel: e820: reserve RAM buffer [mem 0x9b1aa018-0x9bffffff]
Jul 2 07:47:28.792454 kernel: e820: reserve RAM buffer [mem 0x9b3f7018-0x9bffffff]
Jul 2 07:47:28.792460 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jul 2 07:47:28.792467 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jul 2 07:47:28.792474 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 2 07:47:28.792481 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 2 07:47:28.792488 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 07:47:28.792494 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 07:47:28.792501 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 07:47:28.792509 kernel: pnp: PnP ACPI init
Jul 2 07:47:28.792581 kernel: pnp 00:02: [dma 2]
Jul 2 07:47:28.792591 kernel: pnp: PnP ACPI: found 6 devices
Jul 2 07:47:28.792598 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 07:47:28.792605 kernel: NET: Registered PF_INET protocol family
Jul 2 07:47:28.792612 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 07:47:28.792619 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 07:47:28.792626 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 07:47:28.792635 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 07:47:28.792642 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 2 07:47:28.792649 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 07:47:28.792656 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 07:47:28.792663 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 07:47:28.792671 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 07:47:28.792678 kernel: NET: Registered PF_XDP protocol family
Jul 2 07:47:28.792772 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jul 2 07:47:28.792858 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jul 2 07:47:28.792920 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 07:47:28.792981 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 07:47:28.793041 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 07:47:28.793101 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Jul 2 07:47:28.793160 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Jul 2 07:47:28.793228 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 07:47:28.793296 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 07:47:28.793366 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Jul 2 07:47:28.793375 kernel: PCI: CLS 0 bytes, default 64
Jul 2 07:47:28.793383 kernel: Initialise system trusted keyrings
Jul 2 07:47:28.793390 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 07:47:28.793397 kernel: Key type asymmetric registered
Jul 2 07:47:28.793405 kernel: Asymmetric key parser 'x509' registered
Jul 2 07:47:28.793412 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 07:47:28.793419 kernel: io scheduler mq-deadline registered
Jul 2 07:47:28.793426 kernel: io scheduler kyber registered
Jul 2 07:47:28.793435 kernel: io scheduler bfq registered
Jul 2 07:47:28.793443 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 07:47:28.793450 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 07:47:28.793457 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jul 2 07:47:28.793465 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 07:47:28.793472 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 07:47:28.793479 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 07:47:28.793486 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 07:47:28.793494 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 07:47:28.793502 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 07:47:28.793577 kernel: rtc_cmos 00:05: RTC can wake from S4
Jul 2 07:47:28.793589 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 07:47:28.793650 kernel: rtc_cmos 00:05: registered as rtc0
Jul 2 07:47:28.793734 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T07:47:28 UTC (1719906448)
Jul 2 07:47:28.793802 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 2 07:47:28.793811 kernel: efifb: probing for efifb
Jul 2 07:47:28.793818 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 2 07:47:28.793826 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 2 07:47:28.793833 kernel: efifb: scrolling: redraw
Jul 2 07:47:28.793840 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 2 07:47:28.793848 kernel: Console: switching to colour frame buffer device 160x50
Jul 2 07:47:28.793855 kernel: fb0: EFI VGA frame buffer device
Jul 2 07:47:28.793864 kernel: pstore: Registered efi as persistent store backend
Jul 2 07:47:28.793871 kernel: NET: Registered PF_INET6 protocol family
Jul 2 07:47:28.793879 kernel: Segment Routing with IPv6
Jul 2 07:47:28.793886 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 07:47:28.793893 kernel: NET: Registered PF_PACKET protocol family
Jul 2 07:47:28.793900 kernel: Key type dns_resolver registered
Jul 2 07:47:28.793907 kernel: IPI shorthand broadcast: enabled
Jul 2 07:47:28.793915 kernel: sched_clock: Marking stable (409233327, 123333375)->(571650308, -39083606)
Jul 2 07:47:28.793922 kernel: registered taskstats version 1
Jul 2 07:47:28.793929 kernel: Loading compiled-in X.509 certificates
Jul 2 07:47:28.793937 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42'
Jul 2 07:47:28.793945 kernel: Key type .fscrypt registered
Jul 2 07:47:28.793952 kernel: Key type fscrypt-provisioning registered
Jul 2 07:47:28.793959 kernel: pstore: Using crash dump compression: deflate
Jul 2 07:47:28.793966 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 07:47:28.793974 kernel: ima: Allocated hash algorithm: sha1
Jul 2 07:47:28.793981 kernel: ima: No architecture policies found
Jul 2 07:47:28.793988 kernel: clk: Disabling unused clocks
Jul 2 07:47:28.793996 kernel: Freeing unused kernel image (initmem) memory: 47444K
Jul 2 07:47:28.794004 kernel: Write protecting the kernel read-only data: 28672k
Jul 2 07:47:28.794011 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 2 07:47:28.794018 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K
Jul 2 07:47:28.794026 kernel: Run /init as init process
Jul 2 07:47:28.794033 kernel: with arguments:
Jul 2 07:47:28.794041 kernel: /init
Jul 2 07:47:28.794048 kernel: with environment:
Jul 2 07:47:28.794055 kernel: HOME=/
Jul 2 07:47:28.794062 kernel: TERM=linux
Jul 2 07:47:28.794070 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 07:47:28.794079 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 07:47:28.794088 systemd[1]: Detected virtualization kvm.
Jul 2 07:47:28.794097 systemd[1]: Detected architecture x86-64.
Jul 2 07:47:28.794104 systemd[1]: Running in initrd.
Jul 2 07:47:28.794112 systemd[1]: No hostname configured, using default hostname.
Jul 2 07:47:28.794119 systemd[1]: Hostname set to .
Jul 2 07:47:28.794128 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 07:47:28.794136 systemd[1]: Queued start job for default target initrd.target.
Jul 2 07:47:28.794143 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 07:47:28.794151 systemd[1]: Reached target cryptsetup.target.
Jul 2 07:47:28.794158 systemd[1]: Reached target paths.target.
Jul 2 07:47:28.794166 systemd[1]: Reached target slices.target.
Jul 2 07:47:28.794173 systemd[1]: Reached target swap.target.
Jul 2 07:47:28.794181 systemd[1]: Reached target timers.target.
Jul 2 07:47:28.794190 systemd[1]: Listening on iscsid.socket.
Jul 2 07:47:28.794198 systemd[1]: Listening on iscsiuio.socket.
Jul 2 07:47:28.794205 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 07:47:28.794213 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 07:47:28.794221 systemd[1]: Listening on systemd-journald.socket.
Jul 2 07:47:28.794228 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 07:47:28.794236 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 07:47:28.794243 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 07:47:28.794253 systemd[1]: Reached target sockets.target.
Jul 2 07:47:28.794263 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 07:47:28.794273 systemd[1]: Finished network-cleanup.service.
Jul 2 07:47:28.794283 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 07:47:28.794293 systemd[1]: Starting systemd-journald.service...
Jul 2 07:47:28.794303 systemd[1]: Starting systemd-modules-load.service...
Jul 2 07:47:28.794310 systemd[1]: Starting systemd-resolved.service...
Jul 2 07:47:28.794318 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 07:47:28.794326 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 07:47:28.794337 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 07:47:28.794347 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 07:47:28.794357 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 07:47:28.794368 kernel: audit: type=1130 audit(1719906448.782:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.794378 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 07:47:28.794389 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 07:47:28.794400 kernel: audit: type=1130 audit(1719906448.790:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.794434 systemd-journald[198]: Journal started
Jul 2 07:47:28.794478 systemd-journald[198]: Runtime Journal (/run/log/journal/62294ec3e6e04e038a6fa5062881eaee) is 6.0M, max 48.4M, 42.4M free.
Jul 2 07:47:28.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.779759 systemd-modules-load[199]: Inserted module 'overlay'
Jul 2 07:47:28.795989 systemd[1]: Started systemd-journald.service.
Jul 2 07:47:28.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.800775 kernel: audit: type=1130 audit(1719906448.796:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.802723 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 07:47:28.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.807464 systemd[1]: Starting dracut-cmdline.service...
Jul 2 07:47:28.809069 kernel: audit: type=1130 audit(1719906448.803:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.810868 systemd-resolved[200]: Positive Trust Anchors:
Jul 2 07:47:28.813274 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 07:47:28.810884 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 07:47:28.810911 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 07:47:28.813109 systemd-resolved[200]: Defaulting to hostname 'linux'.
Jul 2 07:47:28.824864 kernel: audit: type=1130 audit(1719906448.814:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.813814 systemd[1]: Started systemd-resolved.service.
Jul 2 07:47:28.815147 systemd[1]: Reached target nss-lookup.target.
Jul 2 07:47:28.826978 kernel: Bridge firewalling registered
Jul 2 07:47:28.826234 systemd-modules-load[199]: Inserted module 'br_netfilter'
Jul 2 07:47:28.831492 dracut-cmdline[218]: dracut-dracut-053
Jul 2 07:47:28.833493 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 07:47:28.844720 kernel: SCSI subsystem initialized
Jul 2 07:47:28.856737 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 07:47:28.856761 kernel: device-mapper: uevent: version 1.0.3
Jul 2 07:47:28.856770 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 2 07:47:28.859452 systemd-modules-load[199]: Inserted module 'dm_multipath'
Jul 2 07:47:28.860100 systemd[1]: Finished systemd-modules-load.service.
Jul 2 07:47:28.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.861608 systemd[1]: Starting systemd-sysctl.service...
Jul 2 07:47:28.866415 kernel: audit: type=1130 audit(1719906448.860:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.871900 systemd[1]: Finished systemd-sysctl.service.
Jul 2 07:47:28.876173 kernel: audit: type=1130 audit(1719906448.871:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.900742 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 07:47:28.916736 kernel: iscsi: registered transport (tcp)
Jul 2 07:47:28.937740 kernel: iscsi: registered transport (qla4xxx)
Jul 2 07:47:28.937760 kernel: QLogic iSCSI HBA Driver
Jul 2 07:47:28.960552 systemd[1]: Finished dracut-cmdline.service.
Jul 2 07:47:28.964774 kernel: audit: type=1130 audit(1719906448.960:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:28.961600 systemd[1]: Starting dracut-pre-udev.service...
Jul 2 07:47:29.004736 kernel: raid6: avx2x4 gen() 31034 MB/s
Jul 2 07:47:29.021732 kernel: raid6: avx2x4 xor() 8269 MB/s
Jul 2 07:47:29.038736 kernel: raid6: avx2x2 gen() 32169 MB/s
Jul 2 07:47:29.055734 kernel: raid6: avx2x2 xor() 19215 MB/s
Jul 2 07:47:29.072735 kernel: raid6: avx2x1 gen() 26585 MB/s
Jul 2 07:47:29.089731 kernel: raid6: avx2x1 xor() 15348 MB/s
Jul 2 07:47:29.106733 kernel: raid6: sse2x4 gen() 14790 MB/s
Jul 2 07:47:29.123732 kernel: raid6: sse2x4 xor() 7571 MB/s
Jul 2 07:47:29.140733 kernel: raid6: sse2x2 gen() 16456 MB/s
Jul 2 07:47:29.157732 kernel: raid6: sse2x2 xor() 9840 MB/s
Jul 2 07:47:29.174733 kernel: raid6: sse2x1 gen() 12405 MB/s
Jul 2 07:47:29.192117 kernel: raid6: sse2x1 xor() 7809 MB/s
Jul 2 07:47:29.192147 kernel: raid6: using algorithm avx2x2 gen() 32169 MB/s
Jul 2 07:47:29.192158 kernel: raid6: .... xor() 19215 MB/s, rmw enabled
Jul 2 07:47:29.192841 kernel: raid6: using avx2x2 recovery algorithm
Jul 2 07:47:29.207730 kernel: xor: automatically using best checksumming function avx
Jul 2 07:47:29.298745 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Jul 2 07:47:29.306723 systemd[1]: Finished dracut-pre-udev.service.
Jul 2 07:47:29.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:29.311000 audit: BPF prog-id=7 op=LOAD
Jul 2 07:47:29.311000 audit: BPF prog-id=8 op=LOAD
Jul 2 07:47:29.311536 systemd[1]: Starting systemd-udevd.service...
Jul 2 07:47:29.313049 kernel: audit: type=1130 audit(1719906449.307:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:29.322985 systemd-udevd[401]: Using default interface naming scheme 'v252'.
Jul 2 07:47:29.326642 systemd[1]: Started systemd-udevd.service.
Jul 2 07:47:29.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:29.329300 systemd[1]: Starting dracut-pre-trigger.service...
Jul 2 07:47:29.339740 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Jul 2 07:47:29.362489 systemd[1]: Finished dracut-pre-trigger.service.
Jul 2 07:47:29.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:29.365057 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 07:47:29.397290 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 07:47:29.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:29.436848 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 07:47:29.439735 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 07:47:29.449735 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 2 07:47:29.449793 kernel: AES CTR mode by8 optimization enabled
Jul 2 07:47:29.451130 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 07:47:29.452210 kernel: GPT:9289727 != 19775487
Jul 2 07:47:29.452230 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 07:47:29.452240 kernel: GPT:9289727 != 19775487
Jul 2 07:47:29.452248 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 07:47:29.452257 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 07:47:29.456764 kernel: libata version 3.00 loaded.
Jul 2 07:47:29.460749 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 2 07:47:29.463282 kernel: scsi host0: ata_piix
Jul 2 07:47:29.463447 kernel: scsi host1: ata_piix
Jul 2 07:47:29.463577 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Jul 2 07:47:29.464474 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Jul 2 07:47:29.480656 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 2 07:47:29.481933 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (443)
Jul 2 07:47:29.491385 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 2 07:47:29.495599 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 2 07:47:29.497648 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Jul 2 07:47:29.503040 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 2 07:47:29.505489 systemd[1]: Starting disk-uuid.service...
Jul 2 07:47:29.511683 disk-uuid[517]: Primary Header is updated.
Jul 2 07:47:29.511683 disk-uuid[517]: Secondary Entries is updated.
Jul 2 07:47:29.511683 disk-uuid[517]: Secondary Header is updated.
Jul 2 07:47:29.515441 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 07:47:29.518731 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 07:47:29.522734 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 07:47:29.622798 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 2 07:47:29.624790 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 2 07:47:29.657239 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 2 07:47:29.657451 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 2 07:47:29.674734 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jul 2 07:47:30.520740 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 07:47:30.521232 disk-uuid[518]: The operation has completed successfully.
Jul 2 07:47:30.543047 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 07:47:30.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.543123 systemd[1]: Finished disk-uuid.service.
Jul 2 07:47:30.546986 systemd[1]: Starting verity-setup.service...
Jul 2 07:47:30.559731 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 2 07:47:30.577905 systemd[1]: Found device dev-mapper-usr.device.
Jul 2 07:47:30.579379 systemd[1]: Mounting sysusr-usr.mount...
Jul 2 07:47:30.581442 systemd[1]: Finished verity-setup.service.
Jul 2 07:47:30.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.636583 systemd[1]: Mounted sysusr-usr.mount.
Jul 2 07:47:30.638118 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Jul 2 07:47:30.638171 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Jul 2 07:47:30.640235 systemd[1]: Starting ignition-setup.service...
Jul 2 07:47:30.642254 systemd[1]: Starting parse-ip-for-networkd.service...
Jul 2 07:47:30.649371 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 07:47:30.649402 kernel: BTRFS info (device vda6): using free space tree
Jul 2 07:47:30.649413 kernel: BTRFS info (device vda6): has skinny extents
Jul 2 07:47:30.657191 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 07:47:30.666193 systemd[1]: Finished ignition-setup.service.
Jul 2 07:47:30.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.668386 systemd[1]: Starting ignition-fetch-offline.service...
Jul 2 07:47:30.704415 systemd[1]: Finished parse-ip-for-networkd.service.
Jul 2 07:47:30.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.705000 audit: BPF prog-id=9 op=LOAD
Jul 2 07:47:30.706963 systemd[1]: Starting systemd-networkd.service...
Jul 2 07:47:30.707138 ignition[643]: Ignition 2.14.0
Jul 2 07:47:30.707147 ignition[643]: Stage: fetch-offline
Jul 2 07:47:30.707186 ignition[643]: no configs at "/usr/lib/ignition/base.d"
Jul 2 07:47:30.707195 ignition[643]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 07:47:30.707317 ignition[643]: parsed url from cmdline: ""
Jul 2 07:47:30.707321 ignition[643]: no config URL provided
Jul 2 07:47:30.707327 ignition[643]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 07:47:30.707335 ignition[643]: no config at "/usr/lib/ignition/user.ign"
Jul 2 07:47:30.707354 ignition[643]: op(1): [started] loading QEMU firmware config module
Jul 2 07:47:30.707360 ignition[643]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 07:47:30.710789 ignition[643]: op(1): [finished] loading QEMU firmware config module
Jul 2 07:47:30.757924 ignition[643]: parsing config with SHA512: d31a78b64dd3455c818ce25c2badc80c52d0ddf80faecd3065614e94730b671abfd75f6096c591ac319ef2fe8b399a63cf8012531f87410f927874b022f3414e
Jul 2 07:47:30.764888 unknown[643]: fetched base config from "system"
Jul 2 07:47:30.764902 unknown[643]: fetched user config from "qemu"
Jul 2 07:47:30.765579 ignition[643]: fetch-offline: fetch-offline passed
Jul 2 07:47:30.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.766730 systemd[1]: Finished ignition-fetch-offline.service.
Jul 2 07:47:30.765655 ignition[643]: Ignition finished successfully
Jul 2 07:47:30.783330 systemd-networkd[711]: lo: Link UP
Jul 2 07:47:30.783339 systemd-networkd[711]: lo: Gained carrier
Jul 2 07:47:30.785231 systemd-networkd[711]: Enumeration completed
Jul 2 07:47:30.785414 systemd[1]: Started systemd-networkd.service.
Jul 2 07:47:30.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.786910 systemd[1]: Reached target network.target.
Jul 2 07:47:30.787361 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 07:47:30.788141 systemd[1]: Starting ignition-kargs.service...
Jul 2 07:47:30.789915 systemd[1]: Starting iscsiuio.service...
Jul 2 07:47:30.793774 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 07:47:30.793979 systemd[1]: Started iscsiuio.service.
Jul 2 07:47:30.796860 systemd[1]: Starting iscsid.service...
Jul 2 07:47:30.799189 systemd-networkd[711]: eth0: Link UP
Jul 2 07:47:30.799198 systemd-networkd[711]: eth0: Gained carrier
Jul 2 07:47:30.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.801016 iscsid[718]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 07:47:30.801016 iscsid[718]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Jul 2 07:47:30.801016 iscsid[718]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul 2 07:47:30.801016 iscsid[718]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 2 07:47:30.801016 iscsid[718]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 07:47:30.801016 iscsid[718]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Jul 2 07:47:30.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.801326 systemd[1]: Started iscsid.service.
Jul 2 07:47:30.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.804150 systemd[1]: Starting dracut-initqueue.service...
Jul 2 07:47:30.812389 systemd[1]: Finished dracut-initqueue.service.
Jul 2 07:47:30.815571 systemd[1]: Reached target remote-fs-pre.target.
Jul 2 07:47:30.816064 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 07:47:30.818354 systemd[1]: Reached target remote-fs.target.
Jul 2 07:47:30.820030 systemd[1]: Starting dracut-pre-mount.service...
Jul 2 07:47:30.820861 systemd-networkd[711]: eth0: DHCPv4 address 10.0.0.87/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 07:47:30.826130 ignition[714]: Ignition 2.14.0
Jul 2 07:47:30.826139 ignition[714]: Stage: kargs
Jul 2 07:47:30.826234 ignition[714]: no configs at "/usr/lib/ignition/base.d"
Jul 2 07:47:30.826243 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 07:47:30.827281 ignition[714]: kargs: kargs passed
Jul 2 07:47:30.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.828912 systemd[1]: Finished ignition-kargs.service.
Jul 2 07:47:30.827313 ignition[714]: Ignition finished successfully
Jul 2 07:47:30.830685 systemd[1]: Starting ignition-disks.service...
Jul 2 07:47:30.834240 systemd[1]: Finished dracut-pre-mount.service.
Jul 2 07:47:30.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.836475 ignition[737]: Ignition 2.14.0
Jul 2 07:47:30.836483 ignition[737]: Stage: disks
Jul 2 07:47:30.836565 ignition[737]: no configs at "/usr/lib/ignition/base.d"
Jul 2 07:47:30.836572 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 07:47:30.837483 ignition[737]: disks: disks passed
Jul 2 07:47:30.837511 ignition[737]: Ignition finished successfully
Jul 2 07:47:30.841637 systemd[1]: Finished ignition-disks.service.
Jul 2 07:47:30.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.842543 systemd[1]: Reached target initrd-root-device.target.
Jul 2 07:47:30.844176 systemd[1]: Reached target local-fs-pre.target.
Jul 2 07:47:30.845060 systemd[1]: Reached target local-fs.target.
Jul 2 07:47:30.845851 systemd[1]: Reached target sysinit.target.
Jul 2 07:47:30.846611 systemd[1]: Reached target basic.target.
Jul 2 07:47:30.847640 systemd[1]: Starting systemd-fsck-root.service...
Jul 2 07:47:30.856954 systemd-fsck[745]: ROOT: clean, 614/553520 files, 56020/553472 blocks
Jul 2 07:47:30.862149 systemd[1]: Finished systemd-fsck-root.service.
Jul 2 07:47:30.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.863277 systemd[1]: Mounting sysroot.mount...
Jul 2 07:47:30.869727 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 2 07:47:30.870028 systemd[1]: Mounted sysroot.mount.
Jul 2 07:47:30.870501 systemd[1]: Reached target initrd-root-fs.target.
Jul 2 07:47:30.872928 systemd[1]: Mounting sysroot-usr.mount...
Jul 2 07:47:30.873434 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Jul 2 07:47:30.873465 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 07:47:30.873483 systemd[1]: Reached target ignition-diskful.target.
Jul 2 07:47:30.875117 systemd[1]: Mounted sysroot-usr.mount.
Jul 2 07:47:30.878659 systemd[1]: Starting initrd-setup-root.service...
Jul 2 07:47:30.885483 initrd-setup-root[755]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 07:47:30.889631 initrd-setup-root[763]: cut: /sysroot/etc/group: No such file or directory
Jul 2 07:47:30.892123 initrd-setup-root[771]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 07:47:30.894478 initrd-setup-root[779]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 07:47:30.916227 systemd[1]: Finished initrd-setup-root.service.
Jul 2 07:47:30.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.917205 systemd[1]: Starting ignition-mount.service...
Jul 2 07:47:30.918988 systemd[1]: Starting sysroot-boot.service...
Jul 2 07:47:30.921848 bash[796]: umount: /sysroot/usr/share/oem: not mounted.
Jul 2 07:47:30.928755 ignition[797]: INFO : Ignition 2.14.0
Jul 2 07:47:30.928755 ignition[797]: INFO : Stage: mount
Jul 2 07:47:30.930337 ignition[797]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 07:47:30.930337 ignition[797]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 07:47:30.930337 ignition[797]: INFO : mount: mount passed
Jul 2 07:47:30.930337 ignition[797]: INFO : Ignition finished successfully
Jul 2 07:47:30.934845 systemd[1]: Finished ignition-mount.service.
Jul 2 07:47:30.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:30.937077 systemd[1]: Finished sysroot-boot.service.
Jul 2 07:47:30.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:47:31.061925 systemd-resolved[200]: Detected conflict on linux IN A 10.0.0.87
Jul 2 07:47:31.061941 systemd-resolved[200]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Jul 2 07:47:31.588134 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 07:47:31.593728 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806)
Jul 2 07:47:31.595902 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 07:47:31.595915 kernel: BTRFS info (device vda6): using free space tree
Jul 2 07:47:31.595924 kernel: BTRFS info (device vda6): has skinny extents
Jul 2 07:47:31.599840 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 07:47:31.601466 systemd[1]: Starting ignition-files.service...
Jul 2 07:47:31.613762 ignition[826]: INFO : Ignition 2.14.0
Jul 2 07:47:31.613762 ignition[826]: INFO : Stage: files
Jul 2 07:47:31.615516 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 07:47:31.615516 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 07:47:31.615516 ignition[826]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 07:47:31.619309 ignition[826]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 07:47:31.619309 ignition[826]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 07:47:31.619309 ignition[826]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 07:47:31.619309 ignition[826]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 07:47:31.619309 ignition[826]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 07:47:31.619257 unknown[826]: wrote ssh authorized keys file for user: core
Jul 2 07:47:31.627535 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 07:47:31.627535 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 07:47:31.645185 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 07:47:31.703414 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 07:47:31.705432 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 07:47:31.705432 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 07:47:32.056481 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 07:47:32.204778 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 07:47:32.204778 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 07:47:32.208452 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 07:47:32.208452 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 07:47:32.208452 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 07:47:32.208452 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 07:47:32.208452 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 07:47:32.208452 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 07:47:32.208452 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 07:47:32.208452 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 07:47:32.208452 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 07:47:32.208452 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 07:47:32.208452 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 07:47:32.208452 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 07:47:32.208452 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jul 2 07:47:32.473106 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 07:47:32.524855 systemd-networkd[711]: eth0: Gained IPv6LL
Jul 2 07:47:32.924309 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 07:47:32.924309 ignition[826]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 2 07:47:32.928305 ignition[826]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 07:47:32.928305 ignition[826]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 07:47:32.928305 ignition[826]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 2 07:47:32.928305 ignition[826]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 2 07:47:32.928305 ignition[826]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 07:47:32.928305 ignition[826]: INFO : files: op(e): op(f): [finished] writing unit
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 07:47:32.928305 ignition[826]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 2 07:47:32.928305 ignition[826]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 2 07:47:32.928305 ignition[826]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 07:47:32.928305 ignition[826]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 07:47:32.928305 ignition[826]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 07:47:32.966809 ignition[826]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 07:47:32.968732 ignition[826]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 07:47:32.968732 ignition[826]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:47:32.968732 ignition[826]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:47:32.968732 ignition[826]: INFO : files: files passed Jul 2 07:47:32.968732 ignition[826]: INFO : Ignition finished successfully Jul 2 07:47:32.976933 systemd[1]: Finished ignition-files.service. Jul 2 07:47:32.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.978042 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 07:47:32.979256 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Jul 2 07:47:32.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.979855 systemd[1]: Starting ignition-quench.service... Jul 2 07:47:32.986731 initrd-setup-root-after-ignition[852]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 2 07:47:32.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.982455 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 07:47:32.990182 initrd-setup-root-after-ignition[854]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:47:32.982567 systemd[1]: Finished ignition-quench.service. Jul 2 07:47:32.986672 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 07:47:32.990184 systemd[1]: Reached target ignition-complete.target. Jul 2 07:47:32.995586 systemd[1]: Starting initrd-parse-etc.service... Jul 2 07:47:33.006258 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:47:33.006329 systemd[1]: Finished initrd-parse-etc.service. Jul 2 07:47:33.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:47:33.008887 systemd[1]: Reached target initrd-fs.target. Jul 2 07:47:33.009743 systemd[1]: Reached target initrd.target. Jul 2 07:47:33.011887 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 07:47:33.013754 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 07:47:33.024233 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 07:47:33.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.026404 systemd[1]: Starting initrd-cleanup.service... Jul 2 07:47:33.035745 systemd[1]: Stopped target nss-lookup.target. Jul 2 07:47:33.037369 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 07:47:33.039159 systemd[1]: Stopped target timers.target. Jul 2 07:47:33.040674 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:47:33.041663 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 07:47:33.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.043355 systemd[1]: Stopped target initrd.target. Jul 2 07:47:33.044907 systemd[1]: Stopped target basic.target. Jul 2 07:47:33.046393 systemd[1]: Stopped target ignition-complete.target. Jul 2 07:47:33.048161 systemd[1]: Stopped target ignition-diskful.target. Jul 2 07:47:33.049896 systemd[1]: Stopped target initrd-root-device.target. Jul 2 07:47:33.051667 systemd[1]: Stopped target remote-fs.target. Jul 2 07:47:33.053273 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 07:47:33.054961 systemd[1]: Stopped target sysinit.target. Jul 2 07:47:33.056481 systemd[1]: Stopped target local-fs.target. Jul 2 07:47:33.058045 systemd[1]: Stopped target local-fs-pre.target. 
Jul 2 07:47:33.059694 systemd[1]: Stopped target swap.target. Jul 2 07:47:33.061139 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:47:33.062136 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 07:47:33.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.063806 systemd[1]: Stopped target cryptsetup.target. Jul 2 07:47:33.065374 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:47:33.066356 systemd[1]: Stopped dracut-initqueue.service. Jul 2 07:47:33.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.068012 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 07:47:33.069117 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 07:47:33.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.070946 systemd[1]: Stopped target paths.target. Jul 2 07:47:33.072402 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:47:33.073498 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 07:47:33.075318 systemd[1]: Stopped target slices.target. Jul 2 07:47:33.076841 systemd[1]: Stopped target sockets.target. Jul 2 07:47:33.078364 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 07:47:33.079533 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Jul 2 07:47:33.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.081502 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:47:33.081584 systemd[1]: Stopped ignition-files.service. Jul 2 07:47:33.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.084681 systemd[1]: Stopping ignition-mount.service... Jul 2 07:47:33.086280 systemd[1]: Stopping iscsid.service... Jul 2 07:47:33.087677 iscsid[718]: iscsid shutting down. Jul 2 07:47:33.089027 systemd[1]: Stopping sysroot-boot.service... Jul 2 07:47:33.091673 ignition[867]: INFO : Ignition 2.14.0 Jul 2 07:47:33.091673 ignition[867]: INFO : Stage: umount Jul 2 07:47:33.091673 ignition[867]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:47:33.091673 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:47:33.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.090507 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 07:47:33.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.098600 ignition[867]: INFO : umount: umount passed Jul 2 07:47:33.098600 ignition[867]: INFO : Ignition finished successfully Jul 2 07:47:33.090651 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 07:47:33.093361 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jul 2 07:47:33.096436 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 07:47:33.105160 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:47:33.106539 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 07:47:33.107470 systemd[1]: Stopped iscsid.service. Jul 2 07:47:33.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.109266 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 07:47:33.109334 systemd[1]: Stopped ignition-mount.service. Jul 2 07:47:33.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.112124 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:47:33.113085 systemd[1]: Closed iscsid.socket. Jul 2 07:47:33.114466 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:47:33.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.114497 systemd[1]: Stopped ignition-disks.service. Jul 2 07:47:33.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.116260 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:47:33.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.116290 systemd[1]: Stopped ignition-kargs.service. 
Jul 2 07:47:33.117971 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:47:33.118002 systemd[1]: Stopped ignition-setup.service. Jul 2 07:47:33.119846 systemd[1]: Stopping iscsiuio.service... Jul 2 07:47:33.123376 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:47:33.124213 systemd[1]: Finished initrd-cleanup.service. Jul 2 07:47:33.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.126839 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 07:47:33.127797 systemd[1]: Stopped iscsiuio.service. Jul 2 07:47:33.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.129864 systemd[1]: Stopped target network.target. Jul 2 07:47:33.131439 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:47:33.131466 systemd[1]: Closed iscsiuio.socket. Jul 2 07:47:33.133796 systemd[1]: Stopping systemd-networkd.service... Jul 2 07:47:33.135550 systemd[1]: Stopping systemd-resolved.service... Jul 2 07:47:33.138747 systemd-networkd[711]: eth0: DHCPv6 lease lost Jul 2 07:47:33.139861 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:47:33.141033 systemd[1]: Stopped systemd-networkd.service. Jul 2 07:47:33.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:47:33.143175 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:47:33.143000 audit: BPF prog-id=9 op=UNLOAD Jul 2 07:47:33.143207 systemd[1]: Closed systemd-networkd.socket. Jul 2 07:47:33.146312 systemd[1]: Stopping network-cleanup.service... Jul 2 07:47:33.147086 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:47:33.147970 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 07:47:33.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.150646 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:47:33.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.150677 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:47:33.152570 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:47:33.153357 systemd[1]: Stopped systemd-modules-load.service. Jul 2 07:47:33.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.156033 systemd[1]: Stopping systemd-udevd.service... Jul 2 07:47:33.158080 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:47:33.158688 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:47:33.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.158808 systemd[1]: Stopped systemd-resolved.service. 
Jul 2 07:47:33.163582 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:47:33.163730 systemd[1]: Stopped network-cleanup.service. Jul 2 07:47:33.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.166775 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 07:47:33.167000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:47:33.166887 systemd[1]: Stopped systemd-udevd.service. Jul 2 07:47:33.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.169598 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:47:33.169642 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 07:47:33.171588 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:47:33.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.171622 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 07:47:33.173223 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:47:33.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.173258 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 07:47:33.175209 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jul 2 07:47:33.175243 systemd[1]: Stopped dracut-cmdline.service. Jul 2 07:47:33.176980 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:47:33.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.177019 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 07:47:33.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.179752 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 07:47:33.181633 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 07:47:33.181680 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 2 07:47:33.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.184463 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 07:47:33.184509 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 07:47:33.185508 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:47:33.185555 systemd[1]: Stopped systemd-vconsole-setup.service. 
Jul 2 07:47:33.188268 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 2 07:47:33.188887 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:47:33.188985 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 07:47:33.198297 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:47:33.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.198410 systemd[1]: Stopped sysroot-boot.service. Jul 2 07:47:33.199811 systemd[1]: Reached target initrd-switch-root.target. Jul 2 07:47:33.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.201530 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:47:33.201581 systemd[1]: Stopped initrd-setup-root.service. Jul 2 07:47:33.204074 systemd[1]: Starting initrd-switch-root.service... Jul 2 07:47:33.217819 systemd[1]: Switching root. Jul 2 07:47:33.238947 systemd-journald[198]: Journal stopped Jul 2 07:47:35.800280 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Jul 2 07:47:35.800332 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 07:47:35.800348 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 2 07:47:35.800364 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:47:35.800379 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:47:35.800392 kernel: SELinux: policy capability open_perms=1 Jul 2 07:47:35.800407 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:47:35.800419 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:47:35.800431 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:47:35.800443 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:47:35.800455 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:47:35.800472 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:47:35.800485 kernel: kauditd_printk_skb: 65 callbacks suppressed Jul 2 07:47:35.800502 kernel: audit: type=1403 audit(1719906453.310:76): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:47:35.800517 systemd[1]: Successfully loaded SELinux policy in 46.073ms. Jul 2 07:47:35.800542 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.449ms. Jul 2 07:47:35.800554 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:47:35.800567 systemd[1]: Detected virtualization kvm. Jul 2 07:47:35.800579 systemd[1]: Detected architecture x86-64. Jul 2 07:47:35.800590 systemd[1]: Detected first boot. Jul 2 07:47:35.800600 systemd[1]: Initializing machine ID from VM UUID. 
Jul 2 07:47:35.800611 kernel: audit: type=1400 audit(1719906453.563:77): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:47:35.800620 kernel: audit: type=1400 audit(1719906453.563:78): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:47:35.800630 kernel: audit: type=1334 audit(1719906453.569:79): prog-id=10 op=LOAD Jul 2 07:47:35.800639 kernel: audit: type=1334 audit(1719906453.569:80): prog-id=10 op=UNLOAD Jul 2 07:47:35.800649 kernel: audit: type=1334 audit(1719906453.571:81): prog-id=11 op=LOAD Jul 2 07:47:35.800659 kernel: audit: type=1334 audit(1719906453.571:82): prog-id=11 op=UNLOAD Jul 2 07:47:35.800668 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 07:47:35.800679 kernel: audit: type=1400 audit(1719906453.605:83): avc: denied { associate } for pid=900 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 07:47:35.800689 kernel: audit: type=1300 audit(1719906453.605:83): arch=c000003e syscall=188 success=yes exit=0 a0=c0001558b2 a1=c0000d8de0 a2=c0000e10c0 a3=32 items=0 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:47:35.800699 kernel: audit: type=1327 audit(1719906453.605:83): 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:47:35.800721 systemd[1]: Populated /etc with preset unit settings. Jul 2 07:47:35.800734 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:47:35.800745 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:47:35.800755 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:47:35.800766 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 07:47:35.800776 systemd[1]: Stopped initrd-switch-root.service. Jul 2 07:47:35.800785 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 07:47:35.800796 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 07:47:35.800807 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 07:47:35.800818 systemd[1]: Created slice system-getty.slice. Jul 2 07:47:35.800828 systemd[1]: Created slice system-modprobe.slice. Jul 2 07:47:35.800840 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 07:47:35.800850 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 07:47:35.800861 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 07:47:35.800871 systemd[1]: Created slice user.slice. Jul 2 07:47:35.800880 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:47:35.800892 systemd[1]: Started systemd-ask-password-wall.path. 
Jul 2 07:47:35.800902 systemd[1]: Set up automount boot.automount. Jul 2 07:47:35.800911 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 07:47:35.800921 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 07:47:35.800931 systemd[1]: Stopped target initrd-fs.target. Jul 2 07:47:35.800941 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 07:47:35.800951 systemd[1]: Reached target integritysetup.target. Jul 2 07:47:35.800962 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:47:35.800972 systemd[1]: Reached target remote-fs.target. Jul 2 07:47:35.800983 systemd[1]: Reached target slices.target. Jul 2 07:47:35.800993 systemd[1]: Reached target swap.target. Jul 2 07:47:35.801003 systemd[1]: Reached target torcx.target. Jul 2 07:47:35.801012 systemd[1]: Reached target veritysetup.target. Jul 2 07:47:35.801022 systemd[1]: Listening on systemd-coredump.socket. Jul 2 07:47:35.801033 systemd[1]: Listening on systemd-initctl.socket. Jul 2 07:47:35.801042 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:47:35.801052 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:47:35.801062 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:47:35.801072 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 07:47:35.801083 systemd[1]: Mounting dev-hugepages.mount... Jul 2 07:47:35.801092 systemd[1]: Mounting dev-mqueue.mount... Jul 2 07:47:35.801102 systemd[1]: Mounting media.mount... Jul 2 07:47:35.801112 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:35.801121 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 07:47:35.801131 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 07:47:35.801141 systemd[1]: Mounting tmp.mount... Jul 2 07:47:35.801151 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 07:47:35.801160 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Jul 2 07:47:35.801171 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:47:35.801181 systemd[1]: Starting modprobe@configfs.service... Jul 2 07:47:35.801191 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:47:35.801200 systemd[1]: Starting modprobe@drm.service... Jul 2 07:47:35.801210 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:47:35.801220 systemd[1]: Starting modprobe@fuse.service... Jul 2 07:47:35.801229 systemd[1]: Starting modprobe@loop.service... Jul 2 07:47:35.801240 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 07:47:35.801251 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 07:47:35.801263 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 07:47:35.801281 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 07:47:35.801300 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 07:47:35.801314 systemd[1]: Stopped systemd-journald.service. Jul 2 07:47:35.801324 kernel: fuse: init (API version 7.34) Jul 2 07:47:35.801333 systemd[1]: Starting systemd-journald.service... Jul 2 07:47:35.801343 kernel: loop: module loaded Jul 2 07:47:35.801352 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:47:35.801362 systemd[1]: Starting systemd-network-generator.service... Jul 2 07:47:35.801374 systemd[1]: Starting systemd-remount-fs.service... Jul 2 07:47:35.801384 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:47:35.801394 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 07:47:35.801404 systemd[1]: Stopped verity-setup.service. Jul 2 07:47:35.801414 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:35.801423 systemd[1]: Mounted dev-hugepages.mount. 
Jul 2 07:47:35.801436 systemd-journald[985]: Journal started Jul 2 07:47:35.801487 systemd-journald[985]: Runtime Journal (/run/log/journal/62294ec3e6e04e038a6fa5062881eaee) is 6.0M, max 48.4M, 42.4M free. Jul 2 07:47:33.310000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:47:33.563000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:47:33.563000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:47:33.569000 audit: BPF prog-id=10 op=LOAD Jul 2 07:47:33.569000 audit: BPF prog-id=10 op=UNLOAD Jul 2 07:47:33.571000 audit: BPF prog-id=11 op=LOAD Jul 2 07:47:33.571000 audit: BPF prog-id=11 op=UNLOAD Jul 2 07:47:33.605000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 07:47:33.605000 audit[900]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001558b2 a1=c0000d8de0 a2=c0000e10c0 a3=32 items=0 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:47:33.605000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:47:33.607000 audit[900]: AVC avc: denied { associate } for pid=900 
comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 07:47:33.607000 audit[900]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000155989 a2=1ed a3=0 items=2 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:47:33.607000 audit: CWD cwd="/" Jul 2 07:47:33.607000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.607000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.607000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:47:35.671000 audit: BPF prog-id=12 op=LOAD Jul 2 07:47:35.671000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:47:35.671000 audit: BPF prog-id=13 op=LOAD Jul 2 07:47:35.671000 audit: BPF prog-id=14 op=LOAD Jul 2 07:47:35.671000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:47:35.671000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:47:35.672000 audit: BPF prog-id=15 op=LOAD Jul 2 07:47:35.672000 audit: BPF prog-id=12 op=UNLOAD Jul 2 07:47:35.672000 audit: BPF prog-id=16 op=LOAD Jul 2 07:47:35.673000 audit: BPF prog-id=17 op=LOAD Jul 2 07:47:35.673000 audit: BPF prog-id=13 op=UNLOAD Jul 2 07:47:35.673000 audit: BPF prog-id=14 op=UNLOAD Jul 2 07:47:35.673000 audit[1]: SERVICE_STOP pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.680000 audit: BPF prog-id=15 op=UNLOAD Jul 2 07:47:35.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:47:35.779000 audit: BPF prog-id=18 op=LOAD Jul 2 07:47:35.779000 audit: BPF prog-id=19 op=LOAD Jul 2 07:47:35.779000 audit: BPF prog-id=20 op=LOAD Jul 2 07:47:35.779000 audit: BPF prog-id=16 op=UNLOAD Jul 2 07:47:35.779000 audit: BPF prog-id=17 op=UNLOAD Jul 2 07:47:35.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.798000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:47:35.798000 audit[985]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc2feef760 a2=4000 a3=7ffc2feef7fc items=0 ppid=1 pid=985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:47:35.798000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:47:35.670915 systemd[1]: Queued start job for default target multi-user.target. Jul 2 07:47:35.802822 systemd[1]: Started systemd-journald.service. Jul 2 07:47:33.604545 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:47:35.670927 systemd[1]: Unnecessary job was removed for dev-vda6.device. 
Jul 2 07:47:33.604758 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:47:35.674251 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 07:47:33.604773 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:47:33.604797 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 07:47:33.604806 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 07:47:33.604830 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 07:47:35.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.604840 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 07:47:35.803560 systemd[1]: Mounted dev-mqueue.mount. 
Jul 2 07:47:33.605021 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 07:47:33.605051 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:47:33.605061 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:47:33.605720 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 07:47:33.605756 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 07:47:33.605777 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 07:47:33.605792 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 07:47:33.605811 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 07:47:33.605824 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such 
file or directory" path=/var/lib/torcx/store Jul 2 07:47:35.419571 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:47:35.419832 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:35Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:47:35.419928 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:47:35.420071 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:35Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:47:35.420114 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:35Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 07:47:35.804559 systemd[1]: Mounted media.mount. 
Jul 2 07:47:35.420166 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-07-02T07:47:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 07:47:35.805310 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 07:47:35.806155 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 07:47:35.807014 systemd[1]: Mounted tmp.mount. Jul 2 07:47:35.807941 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 07:47:35.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.809133 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:47:35.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.810144 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:47:35.810328 systemd[1]: Finished modprobe@configfs.service. Jul 2 07:47:35.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.811506 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:47:35.811740 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 2 07:47:35.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.812818 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:47:35.813010 systemd[1]: Finished modprobe@drm.service. Jul 2 07:47:35.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.813983 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:47:35.814172 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:47:35.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.815259 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 07:47:35.815455 systemd[1]: Finished modprobe@fuse.service. 
Jul 2 07:47:35.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.816452 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:47:35.816622 systemd[1]: Finished modprobe@loop.service. Jul 2 07:47:35.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.817797 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:47:35.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.818922 systemd[1]: Finished systemd-network-generator.service. Jul 2 07:47:35.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.820127 systemd[1]: Finished systemd-remount-fs.service. 
Jul 2 07:47:35.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.821484 systemd[1]: Reached target network-pre.target. Jul 2 07:47:35.823297 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 07:47:35.824915 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 07:47:35.825666 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:47:35.827006 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 07:47:35.828523 systemd[1]: Starting systemd-journal-flush.service... Jul 2 07:47:35.829378 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:47:35.830236 systemd[1]: Starting systemd-random-seed.service... Jul 2 07:47:35.841051 systemd-journald[985]: Time spent on flushing to /var/log/journal/62294ec3e6e04e038a6fa5062881eaee is 17.382ms for 1165 entries. Jul 2 07:47:35.841051 systemd-journald[985]: System Journal (/var/log/journal/62294ec3e6e04e038a6fa5062881eaee) is 8.0M, max 195.6M, 187.6M free. Jul 2 07:47:35.889382 systemd-journald[985]: Received client request to flush runtime journal. Jul 2 07:47:35.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:47:35.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.831078 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:47:35.832086 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:47:35.834842 systemd[1]: Starting systemd-sysusers.service... Jul 2 07:47:35.841321 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 07:47:35.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.843668 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 07:47:35.848403 systemd[1]: Finished systemd-random-seed.service. Jul 2 07:47:35.849564 systemd[1]: Reached target first-boot-complete.target. Jul 2 07:47:35.851494 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:47:35.854507 systemd[1]: Finished systemd-sysusers.service. Jul 2 07:47:35.891893 udevadm[1008]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 07:47:35.856341 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:47:35.873552 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:47:35.875345 systemd[1]: Starting systemd-udev-settle.service... Jul 2 07:47:35.890483 systemd[1]: Finished systemd-journal-flush.service. Jul 2 07:47:35.894148 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Jul 2 07:47:35.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.266162 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 07:47:36.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.267000 audit: BPF prog-id=21 op=LOAD Jul 2 07:47:36.267000 audit: BPF prog-id=22 op=LOAD Jul 2 07:47:36.267000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:47:36.267000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:47:36.268471 systemd[1]: Starting systemd-udevd.service... Jul 2 07:47:36.283682 systemd-udevd[1009]: Using default interface naming scheme 'v252'. Jul 2 07:47:36.296931 systemd[1]: Started systemd-udevd.service. Jul 2 07:47:36.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.298000 audit: BPF prog-id=23 op=LOAD Jul 2 07:47:36.299667 systemd[1]: Starting systemd-networkd.service... Jul 2 07:47:36.308000 audit: BPF prog-id=24 op=LOAD Jul 2 07:47:36.308000 audit: BPF prog-id=25 op=LOAD Jul 2 07:47:36.308000 audit: BPF prog-id=26 op=LOAD Jul 2 07:47:36.310272 systemd[1]: Starting systemd-userdbd.service... Jul 2 07:47:36.328250 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 07:47:36.339563 systemd[1]: Started systemd-userdbd.service. Jul 2 07:47:36.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:47:36.351114 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:47:36.365738 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 07:47:36.369736 kernel: ACPI: button: Power Button [PWRF] Jul 2 07:47:36.381314 systemd-networkd[1019]: lo: Link UP Jul 2 07:47:36.381327 systemd-networkd[1019]: lo: Gained carrier Jul 2 07:47:36.381755 systemd-networkd[1019]: Enumeration completed Jul 2 07:47:36.381828 systemd[1]: Started systemd-networkd.service. Jul 2 07:47:36.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.383059 systemd-networkd[1019]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:47:36.383938 systemd-networkd[1019]: eth0: Link UP Jul 2 07:47:36.383946 systemd-networkd[1019]: eth0: Gained carrier Jul 2 07:47:36.394831 systemd-networkd[1019]: eth0: DHCPv4 address 10.0.0.87/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 07:47:36.396000 audit[1017]: AVC avc: denied { confidentiality } for pid=1017 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:47:36.396000 audit[1017]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5633a0c2d370 a1=3207c a2=7fc3961c9bc5 a3=5 items=108 ppid=1009 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:47:36.396000 audit: CWD cwd="/" Jul 2 07:47:36.396000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH 
item=1 name=(null) inode=14304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=2 name=(null) inode=14304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=3 name=(null) inode=14305 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=4 name=(null) inode=14304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=5 name=(null) inode=14306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=6 name=(null) inode=14304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=7 name=(null) inode=14307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=8 name=(null) inode=14307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=9 name=(null) inode=14308 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=10 name=(null) inode=14307 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=11 name=(null) inode=14309 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=12 name=(null) inode=14307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=13 name=(null) inode=14310 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=14 name=(null) inode=14307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=15 name=(null) inode=14311 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=16 name=(null) inode=14307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=17 name=(null) inode=14312 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=18 name=(null) inode=14304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=19 name=(null) inode=14313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=20 name=(null) inode=14313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=21 name=(null) inode=14314 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=22 name=(null) inode=14313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=23 name=(null) inode=14315 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=24 name=(null) inode=14313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=25 name=(null) inode=14316 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=26 name=(null) inode=14313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=27 name=(null) inode=14317 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=28 name=(null) inode=14313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=29 name=(null) inode=14318 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=30 name=(null) inode=14304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=31 name=(null) inode=14319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=32 name=(null) inode=14319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=33 name=(null) inode=14320 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=34 name=(null) inode=14319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=35 name=(null) inode=14321 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=36 name=(null) inode=14319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=37 name=(null) inode=14322 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 2 07:47:36.396000 audit: PATH item=38 name=(null) inode=14319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=39 name=(null) inode=14323 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=40 name=(null) inode=14319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=41 name=(null) inode=14324 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=42 name=(null) inode=14304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=43 name=(null) inode=14325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=44 name=(null) inode=14325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=45 name=(null) inode=14326 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=46 name=(null) inode=14325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=47 
name=(null) inode=14327 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=48 name=(null) inode=14325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=49 name=(null) inode=14328 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=50 name=(null) inode=14325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=51 name=(null) inode=14329 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=52 name=(null) inode=14325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=53 name=(null) inode=14330 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=55 name=(null) inode=14331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=56 name=(null) inode=14331 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=57 name=(null) inode=14332 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=58 name=(null) inode=14331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=59 name=(null) inode=14333 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=60 name=(null) inode=14331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=61 name=(null) inode=14334 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=62 name=(null) inode=14334 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=63 name=(null) inode=14335 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=64 name=(null) inode=14334 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=65 name=(null) inode=14336 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=66 name=(null) inode=14334 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=67 name=(null) inode=16385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=68 name=(null) inode=14334 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=69 name=(null) inode=16386 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=70 name=(null) inode=14334 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=71 name=(null) inode=16387 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=72 name=(null) inode=14331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=73 name=(null) inode=16388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=74 name=(null) inode=16388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=75 name=(null) inode=16389 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=76 name=(null) inode=16388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=77 name=(null) inode=16390 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=78 name=(null) inode=16388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=79 name=(null) inode=16391 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=80 name=(null) inode=16388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=81 name=(null) inode=16392 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=82 name=(null) inode=16388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=83 name=(null) inode=16393 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=84 name=(null) inode=14331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=85 name=(null) inode=16394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=86 name=(null) inode=16394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=87 name=(null) inode=16395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=88 name=(null) inode=16394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=89 name=(null) inode=16396 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=90 name=(null) inode=16394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=91 name=(null) inode=16397 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=92 name=(null) inode=16394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH 
item=93 name=(null) inode=16398 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=94 name=(null) inode=16394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=95 name=(null) inode=16399 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=96 name=(null) inode=14331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=97 name=(null) inode=16400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=98 name=(null) inode=16400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=99 name=(null) inode=16401 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=100 name=(null) inode=16400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=101 name=(null) inode=16402 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=102 name=(null) inode=16400 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=103 name=(null) inode=16403 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=104 name=(null) inode=16400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=105 name=(null) inode=16404 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=106 name=(null) inode=16400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PATH item=107 name=(null) inode=16405 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:36.396000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 07:47:36.412731 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Jul 2 07:47:36.428731 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 07:47:36.433728 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 07:47:36.470300 kernel: kvm: Nested Virtualization enabled Jul 2 07:47:36.470384 kernel: SVM: kvm: Nested Paging enabled Jul 2 07:47:36.470399 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 2 07:47:36.471735 kernel: SVM: Virtual GIF supported Jul 2 07:47:36.487796 kernel: EDAC MC: Ver: 3.0.0 Jul 2 07:47:36.508077 systemd[1]: Finished systemd-udev-settle.service. 
Jul 2 07:47:36.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.510198 systemd[1]: Starting lvm2-activation-early.service... Jul 2 07:47:36.516855 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:47:36.541699 systemd[1]: Finished lvm2-activation-early.service. Jul 2 07:47:36.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.543047 systemd[1]: Reached target cryptsetup.target. Jul 2 07:47:36.545439 systemd[1]: Starting lvm2-activation.service... Jul 2 07:47:36.548704 lvm[1046]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:47:36.571208 systemd[1]: Finished lvm2-activation.service. Jul 2 07:47:36.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.572286 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:47:36.573231 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 07:47:36.573255 systemd[1]: Reached target local-fs.target. Jul 2 07:47:36.574156 systemd[1]: Reached target machines.target. Jul 2 07:47:36.576115 systemd[1]: Starting ldconfig.service... Jul 2 07:47:36.577134 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 07:47:36.577177 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:47:36.577989 systemd[1]: Starting systemd-boot-update.service... Jul 2 07:47:36.579624 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 07:47:36.582454 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 07:47:36.585225 systemd[1]: Starting systemd-sysext.service... Jul 2 07:47:36.586805 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1048 (bootctl) Jul 2 07:47:36.588574 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 07:47:36.591052 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 07:47:36.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.599320 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 07:47:36.604620 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 07:47:36.604795 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 07:47:36.612727 kernel: loop0: detected capacity change from 0 to 209816 Jul 2 07:47:36.625545 systemd-fsck[1056]: fsck.fat 4.2 (2021-01-31) Jul 2 07:47:36.625545 systemd-fsck[1056]: /dev/vda1: 790 files, 119261/258078 clusters Jul 2 07:47:36.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.627251 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 07:47:36.630036 systemd[1]: Mounting boot.mount... 
Jul 2 07:47:36.765538 systemd[1]: Mounted boot.mount. Jul 2 07:47:36.771728 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 07:47:36.774087 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 07:47:36.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.776777 systemd[1]: Finished systemd-boot-update.service. Jul 2 07:47:36.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.787730 kernel: loop1: detected capacity change from 0 to 209816 Jul 2 07:47:36.792149 (sd-sysext)[1062]: Using extensions 'kubernetes'. Jul 2 07:47:36.792430 (sd-sysext)[1062]: Merged extensions into '/usr'. Jul 2 07:47:36.800963 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 07:47:36.808804 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:36.810147 systemd[1]: Mounting usr-share-oem.mount... Jul 2 07:47:36.811261 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:47:36.812435 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:47:36.814120 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:47:36.816034 systemd[1]: Starting modprobe@loop.service... Jul 2 07:47:36.816909 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:47:36.817051 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 2 07:47:36.817150 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:36.819844 systemd[1]: Mounted usr-share-oem.mount. Jul 2 07:47:36.820922 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:47:36.821023 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:47:36.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.822319 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:47:36.822475 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:47:36.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.823802 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:47:36.823913 systemd[1]: Finished modprobe@loop.service. Jul 2 07:47:36.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:47:36.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.825067 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:47:36.825177 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:47:36.826601 systemd[1]: Finished systemd-sysext.service. Jul 2 07:47:36.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:36.828692 systemd[1]: Starting ensure-sysext.service... Jul 2 07:47:36.830237 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 07:47:36.835520 systemd[1]: Reloading. Jul 2 07:47:36.839592 ldconfig[1047]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 07:47:36.840220 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 07:47:36.840881 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 07:47:36.842936 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 2 07:47:36.884208 /usr/lib/systemd/system-generators/torcx-generator[1089]: time="2024-07-02T07:47:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:47:36.884513 /usr/lib/systemd/system-generators/torcx-generator[1089]: time="2024-07-02T07:47:36Z" level=info msg="torcx already run" Jul 2 07:47:36.938850 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:47:36.938865 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:47:36.955204 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 07:47:37.003000 audit: BPF prog-id=27 op=LOAD Jul 2 07:47:37.003000 audit: BPF prog-id=28 op=LOAD Jul 2 07:47:37.003000 audit: BPF prog-id=21 op=UNLOAD Jul 2 07:47:37.003000 audit: BPF prog-id=22 op=UNLOAD Jul 2 07:47:37.004000 audit: BPF prog-id=29 op=LOAD Jul 2 07:47:37.004000 audit: BPF prog-id=18 op=UNLOAD Jul 2 07:47:37.004000 audit: BPF prog-id=30 op=LOAD Jul 2 07:47:37.004000 audit: BPF prog-id=31 op=LOAD Jul 2 07:47:37.004000 audit: BPF prog-id=19 op=UNLOAD Jul 2 07:47:37.004000 audit: BPF prog-id=20 op=UNLOAD Jul 2 07:47:37.006000 audit: BPF prog-id=32 op=LOAD Jul 2 07:47:37.006000 audit: BPF prog-id=23 op=UNLOAD Jul 2 07:47:37.007000 audit: BPF prog-id=33 op=LOAD Jul 2 07:47:37.007000 audit: BPF prog-id=24 op=UNLOAD Jul 2 07:47:37.007000 audit: BPF prog-id=34 op=LOAD Jul 2 07:47:37.007000 audit: BPF prog-id=35 op=LOAD Jul 2 07:47:37.007000 audit: BPF prog-id=25 op=UNLOAD Jul 2 07:47:37.007000 audit: BPF prog-id=26 op=UNLOAD Jul 2 07:47:37.009825 systemd[1]: Finished ldconfig.service. Jul 2 07:47:37.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:37.011641 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 07:47:37.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:37.014893 systemd[1]: Starting audit-rules.service... Jul 2 07:47:37.016460 systemd[1]: Starting clean-ca-certificates.service... Jul 2 07:47:37.018432 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 07:47:37.019000 audit: BPF prog-id=36 op=LOAD Jul 2 07:47:37.021000 audit: BPF prog-id=37 op=LOAD Jul 2 07:47:37.020822 systemd[1]: Starting systemd-resolved.service... 
Jul 2 07:47:37.022952 systemd[1]: Starting systemd-timesyncd.service... Jul 2 07:47:37.024597 systemd[1]: Starting systemd-update-utmp.service... Jul 2 07:47:37.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:37.025853 systemd[1]: Finished clean-ca-certificates.service. Jul 2 07:47:37.028812 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:47:37.030474 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:37.030749 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:47:37.031852 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:47:37.031000 audit[1142]: SYSTEM_BOOT pid=1142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 07:47:37.033675 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:47:37.036245 systemd[1]: Starting modprobe@loop.service... Jul 2 07:47:37.037037 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:47:37.037160 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:47:37.037252 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jul 2 07:47:37.037312 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:37.038159 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:47:37.038268 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:47:37.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:37.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:37.039649 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:47:37.039762 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:47:37.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:37.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:37.041064 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:47:37.041168 systemd[1]: Finished modprobe@loop.service. Jul 2 07:47:37.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:47:37.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:37.044887 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:37.045099 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:47:37.046334 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:47:37.048225 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:47:37.050295 systemd[1]: Starting modprobe@loop.service... Jul 2 07:47:37.051091 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:47:37.051246 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:47:37.051381 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:47:37.051507 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:37.053103 systemd[1]: Finished systemd-journal-catalog-update.service. 
Jul 2 07:47:37.054174 augenrules[1154]: No rules Jul 2 07:47:37.053000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 07:47:37.053000 audit[1154]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff584098f0 a2=420 a3=0 items=0 ppid=1131 pid=1154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:47:37.053000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 07:47:37.054965 systemd[1]: Finished audit-rules.service. Jul 2 07:47:37.056135 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:47:37.056310 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:47:37.057545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:47:37.057745 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:47:37.059094 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:47:37.059245 systemd[1]: Finished modprobe@loop.service. Jul 2 07:47:37.061344 systemd[1]: Finished systemd-update-utmp.service. Jul 2 07:47:37.066259 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:37.066450 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:47:37.067506 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:47:37.069231 systemd[1]: Starting modprobe@drm.service... Jul 2 07:47:37.071141 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:47:37.073153 systemd[1]: Starting modprobe@loop.service... Jul 2 07:47:37.074190 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 07:47:37.074314 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:47:37.075392 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 07:47:37.077671 systemd[1]: Starting systemd-update-done.service... Jul 2 07:47:37.078756 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:47:37.078888 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:37.080254 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:47:37.080380 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:47:37.081616 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:47:37.081792 systemd[1]: Finished modprobe@drm.service. Jul 2 07:47:37.082911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:47:37.082998 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:47:37.084275 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:47:37.084368 systemd[1]: Finished modprobe@loop.service. Jul 2 07:47:37.085590 systemd[1]: Finished systemd-update-done.service. Jul 2 07:47:37.086857 systemd[1]: Started systemd-timesyncd.service. Jul 2 07:47:38.519258 systemd-timesyncd[1141]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 07:47:38.519313 systemd-timesyncd[1141]: Initial clock synchronization to Tue 2024-07-02 07:47:38.519185 UTC. Jul 2 07:47:38.521634 systemd[1]: Finished ensure-sysext.service. Jul 2 07:47:38.523084 systemd-resolved[1137]: Positive Trust Anchors: Jul 2 07:47:38.523098 systemd-resolved[1137]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:47:38.523127 systemd-resolved[1137]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:47:38.523370 systemd[1]: Reached target time-set.target. Jul 2 07:47:38.524422 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:47:38.524454 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:47:38.529508 systemd-resolved[1137]: Defaulting to hostname 'linux'. Jul 2 07:47:38.530785 systemd[1]: Started systemd-resolved.service. Jul 2 07:47:38.531712 systemd[1]: Reached target network.target. Jul 2 07:47:38.532523 systemd[1]: Reached target nss-lookup.target. Jul 2 07:47:38.533366 systemd[1]: Reached target sysinit.target. Jul 2 07:47:38.534248 systemd[1]: Started motdgen.path. Jul 2 07:47:38.534990 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 07:47:38.536209 systemd[1]: Started logrotate.timer. Jul 2 07:47:38.537051 systemd[1]: Started mdadm.timer. Jul 2 07:47:38.537755 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 07:47:38.538626 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:47:38.538647 systemd[1]: Reached target paths.target. Jul 2 07:47:38.539435 systemd[1]: Reached target timers.target. Jul 2 07:47:38.540472 systemd[1]: Listening on dbus.socket. Jul 2 07:47:38.542073 systemd[1]: Starting docker.socket... 
Jul 2 07:47:38.544630 systemd[1]: Listening on sshd.socket. Jul 2 07:47:38.545500 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:47:38.545827 systemd[1]: Listening on docker.socket. Jul 2 07:47:38.546664 systemd[1]: Reached target sockets.target. Jul 2 07:47:38.547471 systemd[1]: Reached target basic.target. Jul 2 07:47:38.548315 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:47:38.548337 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:47:38.549092 systemd[1]: Starting containerd.service... Jul 2 07:47:38.550654 systemd[1]: Starting dbus.service... Jul 2 07:47:38.552151 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 07:47:38.553912 systemd[1]: Starting extend-filesystems.service... Jul 2 07:47:38.554914 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 07:47:38.556195 jq[1174]: false Jul 2 07:47:38.555793 systemd[1]: Starting motdgen.service... Jul 2 07:47:38.557371 systemd[1]: Starting prepare-helm.service... Jul 2 07:47:38.559032 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 07:47:38.560788 systemd[1]: Starting sshd-keygen.service... Jul 2 07:47:38.564899 systemd[1]: Starting systemd-logind.service... Jul 2 07:47:38.565722 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:47:38.565787 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jul 2 07:47:38.566207 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 07:47:38.566729 systemd[1]: Starting update-engine.service... Jul 2 07:47:38.568275 extend-filesystems[1175]: Found loop1 Jul 2 07:47:38.568430 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 07:47:38.570780 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 07:47:38.570948 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 07:47:38.571845 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 07:47:38.573114 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 07:47:38.575581 jq[1192]: true Jul 2 07:47:38.578150 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 07:47:38.578311 systemd[1]: Finished motdgen.service. Jul 2 07:47:38.579720 tar[1194]: linux-amd64/helm Jul 2 07:47:38.582821 dbus-daemon[1173]: [system] SELinux support is enabled Jul 2 07:47:38.583338 systemd[1]: Started dbus.service. Jul 2 07:47:38.585767 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 07:47:38.585792 systemd[1]: Reached target system-config.target. Jul 2 07:47:38.586760 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 07:47:38.586780 systemd[1]: Reached target user-config.target. 
Jul 2 07:47:38.587568 extend-filesystems[1175]: Found sr0 Jul 2 07:47:38.589083 extend-filesystems[1175]: Found vda Jul 2 07:47:38.589763 extend-filesystems[1175]: Found vda1 Jul 2 07:47:38.589763 extend-filesystems[1175]: Found vda2 Jul 2 07:47:38.589763 extend-filesystems[1175]: Found vda3 Jul 2 07:47:38.589763 extend-filesystems[1175]: Found usr Jul 2 07:47:38.589763 extend-filesystems[1175]: Found vda4 Jul 2 07:47:38.593231 extend-filesystems[1175]: Found vda6 Jul 2 07:47:38.593231 extend-filesystems[1175]: Found vda7 Jul 2 07:47:38.593231 extend-filesystems[1175]: Found vda9 Jul 2 07:47:38.593231 extend-filesystems[1175]: Checking size of /dev/vda9 Jul 2 07:47:38.596502 jq[1196]: true Jul 2 07:47:38.605410 update_engine[1190]: I0702 07:47:38.605119 1190 main.cc:92] Flatcar Update Engine starting Jul 2 07:47:38.608234 env[1195]: time="2024-07-02T07:47:38.607190138Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 07:47:38.609113 systemd[1]: Started update-engine.service. Jul 2 07:47:38.609313 update_engine[1190]: I0702 07:47:38.609158 1190 update_check_scheduler.cc:74] Next update check in 9m10s Jul 2 07:47:38.610631 extend-filesystems[1175]: Resized partition /dev/vda9 Jul 2 07:47:38.611205 systemd[1]: Started locksmithd.service. Jul 2 07:47:38.612489 extend-filesystems[1219]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 07:47:38.622986 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 07:47:38.623516 systemd-logind[1189]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 07:47:38.623711 systemd-logind[1189]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 07:47:38.634259 systemd-logind[1189]: New seat seat0. Jul 2 07:47:38.636630 systemd[1]: Started systemd-logind.service. Jul 2 07:47:38.646126 env[1195]: time="2024-07-02T07:47:38.646083427Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jul 2 07:47:38.646591 env[1195]: time="2024-07-02T07:47:38.646563377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:47:38.647980 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 07:47:38.670944 env[1195]: time="2024-07-02T07:47:38.649889885Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:47:38.670944 env[1195]: time="2024-07-02T07:47:38.649986035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:47:38.671640 extend-filesystems[1219]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 07:47:38.671640 extend-filesystems[1219]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 07:47:38.671640 extend-filesystems[1219]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 07:47:38.676565 extend-filesystems[1175]: Resized filesystem in /dev/vda9 Jul 2 07:47:38.677573 env[1195]: time="2024-07-02T07:47:38.673680070Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:47:38.677573 env[1195]: time="2024-07-02T07:47:38.673723231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jul 2 07:47:38.677573 env[1195]: time="2024-07-02T07:47:38.673736536Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 07:47:38.677573 env[1195]: time="2024-07-02T07:47:38.673752626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 07:47:38.677573 env[1195]: time="2024-07-02T07:47:38.673837505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:47:38.677573 env[1195]: time="2024-07-02T07:47:38.674072256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:47:38.677573 env[1195]: time="2024-07-02T07:47:38.674206537Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:47:38.677573 env[1195]: time="2024-07-02T07:47:38.674220103Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 07:47:38.677573 env[1195]: time="2024-07-02T07:47:38.674273313Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 07:47:38.677573 env[1195]: time="2024-07-02T07:47:38.674284644Z" level=info msg="metadata content store policy set" policy=shared Jul 2 07:47:38.671875 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 07:47:38.672044 systemd[1]: Finished extend-filesystems.service. 
Jul 2 07:47:38.679312 locksmithd[1218]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 07:47:38.682334 bash[1228]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:47:38.682984 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 07:47:38.684432 env[1195]: time="2024-07-02T07:47:38.684397078Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 07:47:38.684432 env[1195]: time="2024-07-02T07:47:38.684428497Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 07:47:38.684502 env[1195]: time="2024-07-02T07:47:38.684440981Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 07:47:38.684502 env[1195]: time="2024-07-02T07:47:38.684469314Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 07:47:38.684502 env[1195]: time="2024-07-02T07:47:38.684482158Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 07:47:38.684502 env[1195]: time="2024-07-02T07:47:38.684494832Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 07:47:38.684576 env[1195]: time="2024-07-02T07:47:38.684506283Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 07:47:38.684576 env[1195]: time="2024-07-02T07:47:38.684518716Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 07:47:38.684576 env[1195]: time="2024-07-02T07:47:38.684529938Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Jul 2 07:47:38.684576 env[1195]: time="2024-07-02T07:47:38.684540958Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 07:47:38.684576 env[1195]: time="2024-07-02T07:47:38.684551698Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 07:47:38.684576 env[1195]: time="2024-07-02T07:47:38.684563480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 07:47:38.684690 env[1195]: time="2024-07-02T07:47:38.684637239Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 07:47:38.684711 env[1195]: time="2024-07-02T07:47:38.684695528Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 07:47:38.684937 env[1195]: time="2024-07-02T07:47:38.684912595Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 07:47:38.684992 env[1195]: time="2024-07-02T07:47:38.684939486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 07:47:38.684992 env[1195]: time="2024-07-02T07:47:38.684968490Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 07:47:38.685040 env[1195]: time="2024-07-02T07:47:38.685009076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 07:47:38.685040 env[1195]: time="2024-07-02T07:47:38.685020197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 07:47:38.685040 env[1195]: time="2024-07-02T07:47:38.685032229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jul 2 07:47:38.685095 env[1195]: time="2024-07-02T07:47:38.685042098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 07:47:38.685095 env[1195]: time="2024-07-02T07:47:38.685053409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 07:47:38.685095 env[1195]: time="2024-07-02T07:47:38.685064470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 07:47:38.685095 env[1195]: time="2024-07-02T07:47:38.685075080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 07:47:38.685095 env[1195]: time="2024-07-02T07:47:38.685084678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 07:47:38.685095 env[1195]: time="2024-07-02T07:47:38.685095688Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 07:47:38.685219 env[1195]: time="2024-07-02T07:47:38.685192690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 07:47:38.685219 env[1195]: time="2024-07-02T07:47:38.685206176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 07:47:38.685219 env[1195]: time="2024-07-02T07:47:38.685217597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 07:47:38.685279 env[1195]: time="2024-07-02T07:47:38.685228678Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 07:47:38.685279 env[1195]: time="2024-07-02T07:47:38.685242424Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 07:47:38.685279 env[1195]: time="2024-07-02T07:47:38.685252302Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 07:47:38.685279 env[1195]: time="2024-07-02T07:47:38.685268943Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 07:47:38.685360 env[1195]: time="2024-07-02T07:47:38.685303128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 07:47:38.685524 env[1195]: time="2024-07-02T07:47:38.685471634Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 07:47:38.686102 env[1195]: time="2024-07-02T07:47:38.685525675Z" level=info msg="Connect containerd service" Jul 2 07:47:38.686102 env[1195]: time="2024-07-02T07:47:38.685557985Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 07:47:38.686164 env[1195]: time="2024-07-02T07:47:38.686150046Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:47:38.686350 env[1195]: time="2024-07-02T07:47:38.686327138Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 07:47:38.686376 env[1195]: time="2024-07-02T07:47:38.686362384Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 07:47:38.686415 env[1195]: time="2024-07-02T07:47:38.686401457Z" level=info msg="containerd successfully booted in 0.079809s" Jul 2 07:47:38.686437 systemd[1]: Started containerd.service. 
Jul 2 07:47:38.686803 env[1195]: time="2024-07-02T07:47:38.686750752Z" level=info msg="Start subscribing containerd event" Jul 2 07:47:38.686832 env[1195]: time="2024-07-02T07:47:38.686823970Z" level=info msg="Start recovering state" Jul 2 07:47:38.687531 env[1195]: time="2024-07-02T07:47:38.687394730Z" level=info msg="Start event monitor" Jul 2 07:47:38.687531 env[1195]: time="2024-07-02T07:47:38.687419256Z" level=info msg="Start snapshots syncer" Jul 2 07:47:38.687531 env[1195]: time="2024-07-02T07:47:38.687429776Z" level=info msg="Start cni network conf syncer for default" Jul 2 07:47:38.687531 env[1195]: time="2024-07-02T07:47:38.687437390Z" level=info msg="Start streaming server" Jul 2 07:47:38.977468 tar[1194]: linux-amd64/LICENSE Jul 2 07:47:38.977571 tar[1194]: linux-amd64/README.md Jul 2 07:47:38.981646 systemd[1]: Finished prepare-helm.service. Jul 2 07:47:39.525143 systemd-networkd[1019]: eth0: Gained IPv6LL Jul 2 07:47:39.526947 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 07:47:39.528382 systemd[1]: Reached target network-online.target. Jul 2 07:47:39.530816 systemd[1]: Starting kubelet.service... Jul 2 07:47:40.081686 systemd[1]: Started kubelet.service. Jul 2 07:47:40.289213 sshd_keygen[1198]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 07:47:40.306260 systemd[1]: Finished sshd-keygen.service. Jul 2 07:47:40.308688 systemd[1]: Starting issuegen.service... Jul 2 07:47:40.313276 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 07:47:40.313392 systemd[1]: Finished issuegen.service. Jul 2 07:47:40.315353 systemd[1]: Starting systemd-user-sessions.service... Jul 2 07:47:40.319613 systemd[1]: Finished systemd-user-sessions.service. Jul 2 07:47:40.321832 systemd[1]: Started getty@tty1.service. Jul 2 07:47:40.323730 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 07:47:40.324920 systemd[1]: Reached target getty.target. Jul 2 07:47:40.325826 systemd[1]: Reached target multi-user.target. 
Jul 2 07:47:40.327554 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 07:47:40.332674 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 07:47:40.332800 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 07:47:40.334069 systemd[1]: Startup finished in 540ms (kernel) + 4.613s (initrd) + 5.639s (userspace) = 10.793s. Jul 2 07:47:40.537078 kubelet[1243]: E0702 07:47:40.536997 1243 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:47:40.539104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:47:40.539223 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:47:48.022798 systemd[1]: Created slice system-sshd.slice. Jul 2 07:47:48.023760 systemd[1]: Started sshd@0-10.0.0.87:22-10.0.0.1:56282.service. Jul 2 07:47:48.057660 sshd[1267]: Accepted publickey for core from 10.0.0.1 port 56282 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:47:48.059052 sshd[1267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:47:48.067015 systemd-logind[1189]: New session 1 of user core. Jul 2 07:47:48.067821 systemd[1]: Created slice user-500.slice. Jul 2 07:47:48.068838 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 07:47:48.076235 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 07:47:48.077345 systemd[1]: Starting user@500.service... Jul 2 07:47:48.079658 (systemd)[1270]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:47:48.143984 systemd[1270]: Queued start job for default target default.target. Jul 2 07:47:48.144390 systemd[1270]: Reached target paths.target. 
Jul 2 07:47:48.144409 systemd[1270]: Reached target sockets.target. Jul 2 07:47:48.144420 systemd[1270]: Reached target timers.target. Jul 2 07:47:48.144430 systemd[1270]: Reached target basic.target. Jul 2 07:47:48.144463 systemd[1270]: Reached target default.target. Jul 2 07:47:48.144495 systemd[1270]: Startup finished in 60ms. Jul 2 07:47:48.144565 systemd[1]: Started user@500.service. Jul 2 07:47:48.145408 systemd[1]: Started session-1.scope. Jul 2 07:47:48.195975 systemd[1]: Started sshd@1-10.0.0.87:22-10.0.0.1:56286.service. Jul 2 07:47:48.226871 sshd[1279]: Accepted publickey for core from 10.0.0.1 port 56286 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:47:48.228123 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:47:48.231981 systemd-logind[1189]: New session 2 of user core. Jul 2 07:47:48.232865 systemd[1]: Started session-2.scope. Jul 2 07:47:48.287588 sshd[1279]: pam_unix(sshd:session): session closed for user core Jul 2 07:47:48.290157 systemd[1]: sshd@1-10.0.0.87:22-10.0.0.1:56286.service: Deactivated successfully. Jul 2 07:47:48.290675 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 07:47:48.291231 systemd-logind[1189]: Session 2 logged out. Waiting for processes to exit. Jul 2 07:47:48.292186 systemd[1]: Started sshd@2-10.0.0.87:22-10.0.0.1:56290.service. Jul 2 07:47:48.293026 systemd-logind[1189]: Removed session 2. Jul 2 07:47:48.322538 sshd[1285]: Accepted publickey for core from 10.0.0.1 port 56290 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:47:48.323597 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:47:48.327395 systemd-logind[1189]: New session 3 of user core. Jul 2 07:47:48.328099 systemd[1]: Started session-3.scope. 
Jul 2 07:47:48.381379 sshd[1285]: pam_unix(sshd:session): session closed for user core Jul 2 07:47:48.384723 systemd[1]: sshd@2-10.0.0.87:22-10.0.0.1:56290.service: Deactivated successfully. Jul 2 07:47:48.385337 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 07:47:48.385910 systemd-logind[1189]: Session 3 logged out. Waiting for processes to exit. Jul 2 07:47:48.387047 systemd[1]: Started sshd@3-10.0.0.87:22-10.0.0.1:56296.service. Jul 2 07:47:48.387838 systemd-logind[1189]: Removed session 3. Jul 2 07:47:48.418360 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 56296 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:47:48.419676 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:47:48.423548 systemd-logind[1189]: New session 4 of user core. Jul 2 07:47:48.424666 systemd[1]: Started session-4.scope. Jul 2 07:47:48.477491 sshd[1292]: pam_unix(sshd:session): session closed for user core Jul 2 07:47:48.480537 systemd[1]: sshd@3-10.0.0.87:22-10.0.0.1:56296.service: Deactivated successfully. Jul 2 07:47:48.481027 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 07:47:48.481516 systemd-logind[1189]: Session 4 logged out. Waiting for processes to exit. Jul 2 07:47:48.482375 systemd[1]: Started sshd@4-10.0.0.87:22-10.0.0.1:56310.service. Jul 2 07:47:48.483075 systemd-logind[1189]: Removed session 4. Jul 2 07:47:48.516083 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 56310 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:47:48.517076 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:47:48.520519 systemd-logind[1189]: New session 5 of user core. Jul 2 07:47:48.521292 systemd[1]: Started session-5.scope. 
Jul 2 07:47:48.576073 sudo[1301]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:47:48.576245 sudo[1301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:47:48.594539 systemd[1]: Starting docker.service... Jul 2 07:47:48.626928 env[1312]: time="2024-07-02T07:47:48.626875995Z" level=info msg="Starting up" Jul 2 07:47:48.628036 env[1312]: time="2024-07-02T07:47:48.628002087Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:47:48.628036 env[1312]: time="2024-07-02T07:47:48.628028226Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:47:48.628092 env[1312]: time="2024-07-02T07:47:48.628047051Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:47:48.628092 env[1312]: time="2024-07-02T07:47:48.628057451Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:47:48.629503 env[1312]: time="2024-07-02T07:47:48.629477905Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:47:48.629503 env[1312]: time="2024-07-02T07:47:48.629493574Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:47:48.629503 env[1312]: time="2024-07-02T07:47:48.629504034Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:47:48.629613 env[1312]: time="2024-07-02T07:47:48.629511157Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:47:48.907857 env[1312]: time="2024-07-02T07:47:48.907712634Z" level=info msg="Loading containers: start." Jul 2 07:47:49.022988 kernel: Initializing XFRM netlink socket Jul 2 07:47:49.049111 env[1312]: time="2024-07-02T07:47:49.049040414Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Jul 2 07:47:49.093924 systemd-networkd[1019]: docker0: Link UP Jul 2 07:47:49.103061 env[1312]: time="2024-07-02T07:47:49.103031196Z" level=info msg="Loading containers: done." Jul 2 07:47:49.114391 env[1312]: time="2024-07-02T07:47:49.114322712Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 07:47:49.114611 env[1312]: time="2024-07-02T07:47:49.114580986Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 07:47:49.114748 env[1312]: time="2024-07-02T07:47:49.114723173Z" level=info msg="Daemon has completed initialization" Jul 2 07:47:49.139912 systemd[1]: Started docker.service. Jul 2 07:47:49.144577 env[1312]: time="2024-07-02T07:47:49.144523179Z" level=info msg="API listen on /run/docker.sock" Jul 2 07:47:49.710483 env[1195]: time="2024-07-02T07:47:49.710419156Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 07:47:50.513372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount923731312.mount: Deactivated successfully. Jul 2 07:47:50.789922 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 07:47:50.790124 systemd[1]: Stopped kubelet.service. Jul 2 07:47:50.791306 systemd[1]: Starting kubelet.service... Jul 2 07:47:50.863534 systemd[1]: Started kubelet.service. 
Jul 2 07:47:51.070240 kubelet[1456]: E0702 07:47:51.069950 1456 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:47:51.073465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:47:51.073604 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:47:52.821074 env[1195]: time="2024-07-02T07:47:52.820995316Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:52.822913 env[1195]: time="2024-07-02T07:47:52.822880682Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:52.834013 env[1195]: time="2024-07-02T07:47:52.833945993Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:52.861156 env[1195]: time="2024-07-02T07:47:52.861132838Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:52.861772 env[1195]: time="2024-07-02T07:47:52.861739005Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jul 2 07:47:52.870183 env[1195]: time="2024-07-02T07:47:52.870154136Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 07:47:55.565717 env[1195]: time="2024-07-02T07:47:55.565653780Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:55.568556 env[1195]: time="2024-07-02T07:47:55.568504577Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:55.570967 env[1195]: time="2024-07-02T07:47:55.570927110Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:55.573565 env[1195]: time="2024-07-02T07:47:55.573505696Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:55.574425 env[1195]: time="2024-07-02T07:47:55.574378072Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jul 2 07:47:55.585948 env[1195]: time="2024-07-02T07:47:55.585906892Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 07:47:56.940093 env[1195]: time="2024-07-02T07:47:56.940016862Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:56.942064 env[1195]: time="2024-07-02T07:47:56.942022734Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:56.947433 env[1195]: time="2024-07-02T07:47:56.947386462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:56.949206 env[1195]: time="2024-07-02T07:47:56.949151543Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:56.950204 env[1195]: time="2024-07-02T07:47:56.950171956Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jul 2 07:47:56.960806 env[1195]: time="2024-07-02T07:47:56.960770152Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 07:47:58.013807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089540251.mount: Deactivated successfully. 
Jul 2 07:47:58.863348 env[1195]: time="2024-07-02T07:47:58.863281329Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:58.865208 env[1195]: time="2024-07-02T07:47:58.865165102Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:58.866581 env[1195]: time="2024-07-02T07:47:58.866550570Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:58.867895 env[1195]: time="2024-07-02T07:47:58.867867370Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:58.868213 env[1195]: time="2024-07-02T07:47:58.868165369Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 07:47:58.877974 env[1195]: time="2024-07-02T07:47:58.877918028Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 07:47:59.932729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2042605808.mount: Deactivated successfully. 
Jul 2 07:47:59.939515 env[1195]: time="2024-07-02T07:47:59.939480527Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:59.941543 env[1195]: time="2024-07-02T07:47:59.941516946Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:59.942989 env[1195]: time="2024-07-02T07:47:59.942918294Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:59.944228 env[1195]: time="2024-07-02T07:47:59.944206199Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:59.944705 env[1195]: time="2024-07-02T07:47:59.944678445Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 07:47:59.955851 env[1195]: time="2024-07-02T07:47:59.955814369Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 07:48:00.695515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3881853911.mount: Deactivated successfully. Jul 2 07:48:01.227905 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 07:48:01.228160 systemd[1]: Stopped kubelet.service. Jul 2 07:48:01.230002 systemd[1]: Starting kubelet.service... Jul 2 07:48:01.310499 systemd[1]: Started kubelet.service. 
Jul 2 07:48:01.637777 kubelet[1501]: E0702 07:48:01.637499 1501 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:48:01.639493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:48:01.639607 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:48:05.075408 env[1195]: time="2024-07-02T07:48:05.075350526Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:05.079386 env[1195]: time="2024-07-02T07:48:05.079346891Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:05.082350 env[1195]: time="2024-07-02T07:48:05.082317412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:05.084480 env[1195]: time="2024-07-02T07:48:05.084426778Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:05.085289 env[1195]: time="2024-07-02T07:48:05.085238169Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 07:48:05.093976 env[1195]: time="2024-07-02T07:48:05.093917586Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 2 
07:48:05.671497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370603619.mount: Deactivated successfully. Jul 2 07:48:06.273972 env[1195]: time="2024-07-02T07:48:06.273878284Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:06.275911 env[1195]: time="2024-07-02T07:48:06.275878335Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:06.277640 env[1195]: time="2024-07-02T07:48:06.277602268Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:06.279043 env[1195]: time="2024-07-02T07:48:06.278938594Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:06.279525 env[1195]: time="2024-07-02T07:48:06.279487453Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jul 2 07:48:08.660369 systemd[1]: Stopped kubelet.service. Jul 2 07:48:08.662247 systemd[1]: Starting kubelet.service... Jul 2 07:48:08.674385 systemd[1]: Reloading. 
Jul 2 07:48:08.728627 /usr/lib/systemd/system-generators/torcx-generator[1615]: time="2024-07-02T07:48:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:48:08.728655 /usr/lib/systemd/system-generators/torcx-generator[1615]: time="2024-07-02T07:48:08Z" level=info msg="torcx already run" Jul 2 07:48:09.182751 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:48:09.182767 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:48:09.199052 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:48:09.270081 systemd[1]: Started kubelet.service. Jul 2 07:48:09.272515 systemd[1]: Stopping kubelet.service... Jul 2 07:48:09.272721 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:48:09.272854 systemd[1]: Stopped kubelet.service. Jul 2 07:48:09.274061 systemd[1]: Starting kubelet.service... Jul 2 07:48:09.348979 systemd[1]: Started kubelet.service. Jul 2 07:48:09.392081 kubelet[1663]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:48:09.392081 kubelet[1663]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 2 07:48:09.392081 kubelet[1663]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:48:09.392458 kubelet[1663]: I0702 07:48:09.392128 1663 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:48:09.710994 kubelet[1663]: I0702 07:48:09.710950 1663 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 07:48:09.710994 kubelet[1663]: I0702 07:48:09.710987 1663 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:48:09.711193 kubelet[1663]: I0702 07:48:09.711180 1663 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 07:48:09.733737 kubelet[1663]: I0702 07:48:09.733691 1663 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:48:09.738741 kubelet[1663]: E0702 07:48:09.738686 1663 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:09.749346 kubelet[1663]: I0702 07:48:09.749317 1663 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 07:48:09.749521 kubelet[1663]: I0702 07:48:09.749506 1663 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:48:09.749690 kubelet[1663]: I0702 07:48:09.749671 1663 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:48:09.749800 kubelet[1663]: I0702 07:48:09.749696 1663 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:48:09.749800 kubelet[1663]: I0702 07:48:09.749707 1663 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:48:09.750318 kubelet[1663]: I0702 
07:48:09.750293 1663 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:48:09.752089 kubelet[1663]: I0702 07:48:09.752065 1663 kubelet.go:393] "Attempting to sync node with API server" Jul 2 07:48:09.752089 kubelet[1663]: I0702 07:48:09.752084 1663 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:48:09.752168 kubelet[1663]: I0702 07:48:09.752105 1663 kubelet.go:309] "Adding apiserver pod source" Jul 2 07:48:09.752168 kubelet[1663]: I0702 07:48:09.752119 1663 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:48:09.752642 kubelet[1663]: W0702 07:48:09.752590 1663 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:09.752689 kubelet[1663]: E0702 07:48:09.752650 1663 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:09.752909 kubelet[1663]: W0702 07:48:09.752864 1663 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:09.752909 kubelet[1663]: E0702 07:48:09.752894 1663 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:09.753250 kubelet[1663]: I0702 07:48:09.753219 1663 kuberuntime_manager.go:257] "Container runtime initialized" 
containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:48:09.756730 kubelet[1663]: W0702 07:48:09.756701 1663 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 07:48:09.757433 kubelet[1663]: I0702 07:48:09.757377 1663 server.go:1232] "Started kubelet" Jul 2 07:48:09.757862 kubelet[1663]: I0702 07:48:09.757688 1663 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 07:48:09.758024 kubelet[1663]: I0702 07:48:09.758004 1663 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:48:09.758073 kubelet[1663]: I0702 07:48:09.758053 1663 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:48:09.758389 kubelet[1663]: E0702 07:48:09.758371 1663 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 07:48:09.758484 kubelet[1663]: E0702 07:48:09.758470 1663 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:48:09.758837 kubelet[1663]: I0702 07:48:09.758795 1663 server.go:462] "Adding debug handlers to kubelet server" Jul 2 07:48:09.760193 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 2 07:48:09.760259 kubelet[1663]: E0702 07:48:09.758998 1663 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de55d708e021d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 9, 757344216, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 9, 757344216, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.87:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.87:6443: connect: connection refused'(may retry after sleeping) Jul 2 07:48:09.760357 kubelet[1663]: I0702 07:48:09.760301 1663 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:48:09.760944 kubelet[1663]: I0702 07:48:09.760507 1663 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:48:09.761827 kubelet[1663]: I0702 07:48:09.761734 1663 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:48:09.761827 kubelet[1663]: I0702 07:48:09.761794 1663 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:48:09.761827 kubelet[1663]: E0702 07:48:09.761826 1663 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:48:09.761926 kubelet[1663]: W0702 07:48:09.761875 1663 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:09.761968 kubelet[1663]: E0702 07:48:09.761932 1663 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:09.762416 kubelet[1663]: E0702 07:48:09.762389 1663 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="200ms" Jul 2 07:48:09.778679 kubelet[1663]: I0702 07:48:09.778631 1663 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:48:09.780699 kubelet[1663]: I0702 07:48:09.780643 1663 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:48:09.780699 kubelet[1663]: I0702 07:48:09.780710 1663 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:48:09.780928 kubelet[1663]: I0702 07:48:09.780737 1663 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 07:48:09.781052 kubelet[1663]: E0702 07:48:09.781020 1663 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:48:09.782085 kubelet[1663]: W0702 07:48:09.782037 1663 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:09.782133 kubelet[1663]: E0702 07:48:09.782093 1663 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:09.783790 kubelet[1663]: I0702 07:48:09.783769 1663 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:48:09.783905 kubelet[1663]: I0702 07:48:09.783875 1663 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:48:09.784014 kubelet[1663]: I0702 07:48:09.784000 1663 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:48:09.862748 kubelet[1663]: I0702 07:48:09.862709 1663 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:48:09.863018 kubelet[1663]: E0702 07:48:09.863001 1663 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Jul 2 07:48:09.881177 kubelet[1663]: E0702 07:48:09.881118 1663 kubelet.go:2327] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Jul 2 07:48:09.963934 kubelet[1663]: E0702 07:48:09.963813 1663 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="400ms" Jul 2 07:48:10.064208 kubelet[1663]: I0702 07:48:10.064174 1663 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:48:10.064558 kubelet[1663]: E0702 07:48:10.064540 1663 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Jul 2 07:48:10.078117 kubelet[1663]: I0702 07:48:10.078097 1663 policy_none.go:49] "None policy: Start" Jul 2 07:48:10.078850 kubelet[1663]: I0702 07:48:10.078838 1663 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 07:48:10.078921 kubelet[1663]: I0702 07:48:10.078860 1663 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:48:10.081888 kubelet[1663]: E0702 07:48:10.081842 1663 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 07:48:10.087519 systemd[1]: Created slice kubepods.slice. Jul 2 07:48:10.091614 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 07:48:10.094379 systemd[1]: Created slice kubepods-besteffort.slice. 
Jul 2 07:48:10.099597 kubelet[1663]: I0702 07:48:10.099560 1663 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:48:10.099995 kubelet[1663]: I0702 07:48:10.099780 1663 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:48:10.100933 kubelet[1663]: E0702 07:48:10.100918 1663 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 07:48:10.365404 kubelet[1663]: E0702 07:48:10.365364 1663 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="800ms" Jul 2 07:48:10.466045 kubelet[1663]: I0702 07:48:10.466007 1663 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:48:10.466457 kubelet[1663]: E0702 07:48:10.466427 1663 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Jul 2 07:48:10.477829 kubelet[1663]: E0702 07:48:10.477707 1663 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de55d708e021d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, 
Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 9, 757344216, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 9, 757344216, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.87:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.87:6443: connect: connection refused'(may retry after sleeping) Jul 2 07:48:10.482870 kubelet[1663]: I0702 07:48:10.482839 1663 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 07:48:10.483792 kubelet[1663]: I0702 07:48:10.483766 1663 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 07:48:10.484569 kubelet[1663]: I0702 07:48:10.484550 1663 topology_manager.go:215] "Topology Admit Handler" podUID="4fcb0ae580bc7d3935ca9e3b8e607872" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 07:48:10.489274 systemd[1]: Created slice kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice. Jul 2 07:48:10.496233 systemd[1]: Created slice kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice. Jul 2 07:48:10.498981 systemd[1]: Created slice kubepods-burstable-pod4fcb0ae580bc7d3935ca9e3b8e607872.slice. 
Jul 2 07:48:10.565579 kubelet[1663]: I0702 07:48:10.565541 1663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:48:10.565579 kubelet[1663]: I0702 07:48:10.565585 1663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:48:10.565869 kubelet[1663]: I0702 07:48:10.565609 1663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:48:10.565869 kubelet[1663]: I0702 07:48:10.565625 1663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:48:10.565869 kubelet[1663]: I0702 07:48:10.565644 1663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " 
pod="kube-system/kube-scheduler-localhost" Jul 2 07:48:10.565869 kubelet[1663]: I0702 07:48:10.565661 1663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4fcb0ae580bc7d3935ca9e3b8e607872-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4fcb0ae580bc7d3935ca9e3b8e607872\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:48:10.565869 kubelet[1663]: I0702 07:48:10.565688 1663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:48:10.566006 kubelet[1663]: I0702 07:48:10.565798 1663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4fcb0ae580bc7d3935ca9e3b8e607872-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4fcb0ae580bc7d3935ca9e3b8e607872\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:48:10.566006 kubelet[1663]: I0702 07:48:10.565853 1663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4fcb0ae580bc7d3935ca9e3b8e607872-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4fcb0ae580bc7d3935ca9e3b8e607872\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:48:10.796078 kubelet[1663]: E0702 07:48:10.796038 1663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:10.796687 env[1195]: time="2024-07-02T07:48:10.796622714Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jul 2 07:48:10.797885 kubelet[1663]: E0702 07:48:10.797834 1663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:10.798271 env[1195]: time="2024-07-02T07:48:10.798246209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jul 2 07:48:10.800522 kubelet[1663]: E0702 07:48:10.800486 1663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:10.800948 env[1195]: time="2024-07-02T07:48:10.800896048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4fcb0ae580bc7d3935ca9e3b8e607872,Namespace:kube-system,Attempt:0,}" Jul 2 07:48:10.923070 kubelet[1663]: W0702 07:48:10.922950 1663 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:10.923070 kubelet[1663]: E0702 07:48:10.923038 1663 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:10.955333 kubelet[1663]: W0702 07:48:10.955290 1663 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
10.0.0.87:6443: connect: connection refused Jul 2 07:48:10.955333 kubelet[1663]: E0702 07:48:10.955330 1663 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:11.149874 kubelet[1663]: W0702 07:48:11.149716 1663 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:11.149874 kubelet[1663]: E0702 07:48:11.149780 1663 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:11.166319 kubelet[1663]: E0702 07:48:11.166272 1663 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="1.6s" Jul 2 07:48:11.251227 kubelet[1663]: W0702 07:48:11.251156 1663 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:11.251227 kubelet[1663]: E0702 07:48:11.251222 1663 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Jul 2 07:48:11.268101 kubelet[1663]: I0702 
07:48:11.268080 1663 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:48:11.268287 kubelet[1663]: E0702 07:48:11.268269 1663 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Jul 2 07:48:11.295755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2462549559.mount: Deactivated successfully. Jul 2 07:48:11.299711 env[1195]: time="2024-07-02T07:48:11.299658864Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:11.303375 env[1195]: time="2024-07-02T07:48:11.303336968Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:11.304347 env[1195]: time="2024-07-02T07:48:11.304295389Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:11.305188 env[1195]: time="2024-07-02T07:48:11.305157113Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:11.307913 env[1195]: time="2024-07-02T07:48:11.307889571Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:11.308882 env[1195]: time="2024-07-02T07:48:11.308850127Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:11.310182 env[1195]: 
time="2024-07-02T07:48:11.310142193Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:11.311433 env[1195]: time="2024-07-02T07:48:11.311396966Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:11.312926 env[1195]: time="2024-07-02T07:48:11.312893456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:11.314531 env[1195]: time="2024-07-02T07:48:11.314490571Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:11.316947 env[1195]: time="2024-07-02T07:48:11.316916988Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:11.318246 env[1195]: time="2024-07-02T07:48:11.318209485Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:11.336434 env[1195]: time="2024-07-02T07:48:11.336370654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:48:11.336434 env[1195]: time="2024-07-02T07:48:11.336414428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:48:11.336434 env[1195]: time="2024-07-02T07:48:11.336425359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:48:11.336626 env[1195]: time="2024-07-02T07:48:11.336561281Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/086d42e5af8d18ecc7cbb0da1760e15f2f2112c5f6ae5e938177e62fb8df425e pid=1704 runtime=io.containerd.runc.v2 Jul 2 07:48:11.348833 env[1195]: time="2024-07-02T07:48:11.348763751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:48:11.349004 env[1195]: time="2024-07-02T07:48:11.348808488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:48:11.349004 env[1195]: time="2024-07-02T07:48:11.348823116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:48:11.349004 env[1195]: time="2024-07-02T07:48:11.348977754Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c2ca16bd496ea29be30d6071baffd8c45151ce28ccb9b3c7a48c1437ee1aafe pid=1728 runtime=io.containerd.runc.v2 Jul 2 07:48:11.349745 systemd[1]: Started cri-containerd-086d42e5af8d18ecc7cbb0da1760e15f2f2112c5f6ae5e938177e62fb8df425e.scope. Jul 2 07:48:11.361481 env[1195]: time="2024-07-02T07:48:11.361430537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:48:11.361610 env[1195]: time="2024-07-02T07:48:11.361587721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:48:11.361710 env[1195]: time="2024-07-02T07:48:11.361688446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:48:11.362017 env[1195]: time="2024-07-02T07:48:11.361952485Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c702131900b3ea8b11e232f1b3acd68e7a9e80d66d0257e02d9d2a675c0b6c61 pid=1753 runtime=io.containerd.runc.v2 Jul 2 07:48:11.372276 systemd[1]: Started cri-containerd-4c2ca16bd496ea29be30d6071baffd8c45151ce28ccb9b3c7a48c1437ee1aafe.scope. Jul 2 07:48:11.377782 systemd[1]: Started cri-containerd-c702131900b3ea8b11e232f1b3acd68e7a9e80d66d0257e02d9d2a675c0b6c61.scope. Jul 2 07:48:11.388784 env[1195]: time="2024-07-02T07:48:11.388742551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4fcb0ae580bc7d3935ca9e3b8e607872,Namespace:kube-system,Attempt:0,} returns sandbox id \"086d42e5af8d18ecc7cbb0da1760e15f2f2112c5f6ae5e938177e62fb8df425e\"" Jul 2 07:48:11.389657 kubelet[1663]: E0702 07:48:11.389627 1663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:11.392004 env[1195]: time="2024-07-02T07:48:11.391978732Z" level=info msg="CreateContainer within sandbox \"086d42e5af8d18ecc7cbb0da1760e15f2f2112c5f6ae5e938177e62fb8df425e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 07:48:11.409098 env[1195]: time="2024-07-02T07:48:11.409028184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c2ca16bd496ea29be30d6071baffd8c45151ce28ccb9b3c7a48c1437ee1aafe\"" Jul 2 07:48:11.409712 kubelet[1663]: E0702 07:48:11.409663 1663 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:11.411226 env[1195]: time="2024-07-02T07:48:11.411205970Z" level=info msg="CreateContainer within sandbox \"4c2ca16bd496ea29be30d6071baffd8c45151ce28ccb9b3c7a48c1437ee1aafe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 07:48:11.416632 env[1195]: time="2024-07-02T07:48:11.416588125Z" level=info msg="CreateContainer within sandbox \"086d42e5af8d18ecc7cbb0da1760e15f2f2112c5f6ae5e938177e62fb8df425e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4f950f39ada03f0bcc94ce59c937e697ea4f53624722e15dce2622756ba5c2b8\"" Jul 2 07:48:11.417055 env[1195]: time="2024-07-02T07:48:11.417023176Z" level=info msg="StartContainer for \"4f950f39ada03f0bcc94ce59c937e697ea4f53624722e15dce2622756ba5c2b8\"" Jul 2 07:48:11.425482 env[1195]: time="2024-07-02T07:48:11.425437528Z" level=info msg="CreateContainer within sandbox \"4c2ca16bd496ea29be30d6071baffd8c45151ce28ccb9b3c7a48c1437ee1aafe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fc8c37af500276684016c556430f4169d768983c017bfd66476d9c9ea89945d0\"" Jul 2 07:48:11.425992 env[1195]: time="2024-07-02T07:48:11.425965368Z" level=info msg="StartContainer for \"fc8c37af500276684016c556430f4169d768983c017bfd66476d9c9ea89945d0\"" Jul 2 07:48:11.432457 env[1195]: time="2024-07-02T07:48:11.432158942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c702131900b3ea8b11e232f1b3acd68e7a9e80d66d0257e02d9d2a675c0b6c61\"" Jul 2 07:48:11.432988 kubelet[1663]: E0702 07:48:11.432951 1663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jul 2 07:48:11.433095 systemd[1]: Started cri-containerd-4f950f39ada03f0bcc94ce59c937e697ea4f53624722e15dce2622756ba5c2b8.scope. Jul 2 07:48:11.435185 env[1195]: time="2024-07-02T07:48:11.435159777Z" level=info msg="CreateContainer within sandbox \"c702131900b3ea8b11e232f1b3acd68e7a9e80d66d0257e02d9d2a675c0b6c61\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 07:48:11.446508 systemd[1]: Started cri-containerd-fc8c37af500276684016c556430f4169d768983c017bfd66476d9c9ea89945d0.scope. Jul 2 07:48:11.458909 env[1195]: time="2024-07-02T07:48:11.458781043Z" level=info msg="CreateContainer within sandbox \"c702131900b3ea8b11e232f1b3acd68e7a9e80d66d0257e02d9d2a675c0b6c61\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e2423feb82bc3cfd543156909986e2c8dc57c327e02b3b9d53edb088976a4554\"" Jul 2 07:48:11.459652 env[1195]: time="2024-07-02T07:48:11.459630053Z" level=info msg="StartContainer for \"e2423feb82bc3cfd543156909986e2c8dc57c327e02b3b9d53edb088976a4554\"" Jul 2 07:48:11.465868 env[1195]: time="2024-07-02T07:48:11.465819438Z" level=info msg="StartContainer for \"4f950f39ada03f0bcc94ce59c937e697ea4f53624722e15dce2622756ba5c2b8\" returns successfully" Jul 2 07:48:11.477362 systemd[1]: Started cri-containerd-e2423feb82bc3cfd543156909986e2c8dc57c327e02b3b9d53edb088976a4554.scope. 
Jul 2 07:48:11.489128 env[1195]: time="2024-07-02T07:48:11.489073774Z" level=info msg="StartContainer for \"fc8c37af500276684016c556430f4169d768983c017bfd66476d9c9ea89945d0\" returns successfully" Jul 2 07:48:11.516874 env[1195]: time="2024-07-02T07:48:11.516826169Z" level=info msg="StartContainer for \"e2423feb82bc3cfd543156909986e2c8dc57c327e02b3b9d53edb088976a4554\" returns successfully" Jul 2 07:48:11.787365 kubelet[1663]: E0702 07:48:11.787326 1663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:11.789187 kubelet[1663]: E0702 07:48:11.789166 1663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:11.790619 kubelet[1663]: E0702 07:48:11.790598 1663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:12.770219 kubelet[1663]: E0702 07:48:12.770177 1663 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 07:48:12.792972 kubelet[1663]: E0702 07:48:12.792927 1663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:12.869287 kubelet[1663]: I0702 07:48:12.869265 1663 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:48:12.873255 kubelet[1663]: I0702 07:48:12.873231 1663 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 07:48:13.755887 kubelet[1663]: I0702 07:48:13.755843 1663 apiserver.go:52] "Watching apiserver" Jul 2 07:48:13.762370 kubelet[1663]: I0702 07:48:13.762321 1663 
desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:48:14.919438 kubelet[1663]: E0702 07:48:14.919413 1663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:15.545091 systemd[1]: Reloading. Jul 2 07:48:15.616903 /usr/lib/systemd/system-generators/torcx-generator[1957]: time="2024-07-02T07:48:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:48:15.616936 /usr/lib/systemd/system-generators/torcx-generator[1957]: time="2024-07-02T07:48:15Z" level=info msg="torcx already run" Jul 2 07:48:15.674139 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:48:15.674154 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:48:15.691483 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:48:15.792828 systemd[1]: Stopping kubelet.service... Jul 2 07:48:15.813363 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:48:15.813528 systemd[1]: Stopped kubelet.service. Jul 2 07:48:15.814978 systemd[1]: Starting kubelet.service... Jul 2 07:48:15.887331 systemd[1]: Started kubelet.service. Jul 2 07:48:15.932583 kubelet[2003]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:48:15.932583 kubelet[2003]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:48:15.932583 kubelet[2003]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:48:15.933031 kubelet[2003]: I0702 07:48:15.932646 2003 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:48:15.937067 kubelet[2003]: I0702 07:48:15.937040 2003 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 07:48:15.937067 kubelet[2003]: I0702 07:48:15.937066 2003 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:48:15.937291 kubelet[2003]: I0702 07:48:15.937276 2003 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 07:48:15.938845 kubelet[2003]: I0702 07:48:15.938824 2003 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 07:48:15.940053 kubelet[2003]: I0702 07:48:15.940031 2003 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:48:15.942880 sudo[2018]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 07:48:15.943089 sudo[2018]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 07:48:15.948112 kubelet[2003]: I0702 07:48:15.947948 2003 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 07:48:15.949735 kubelet[2003]: I0702 07:48:15.949147 2003 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:48:15.949735 kubelet[2003]: I0702 07:48:15.949353 2003 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:48:15.949735 kubelet[2003]: I0702 07:48:15.949376 2003 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:48:15.949735 kubelet[2003]: I0702 07:48:15.949387 2003 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:48:15.949735 kubelet[2003]: I0702 
07:48:15.949425 2003 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:48:15.949735 kubelet[2003]: I0702 07:48:15.949517 2003 kubelet.go:393] "Attempting to sync node with API server" Jul 2 07:48:15.953513 kubelet[2003]: I0702 07:48:15.949533 2003 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:48:15.953513 kubelet[2003]: I0702 07:48:15.949556 2003 kubelet.go:309] "Adding apiserver pod source" Jul 2 07:48:15.953513 kubelet[2003]: I0702 07:48:15.949568 2003 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:48:15.953513 kubelet[2003]: I0702 07:48:15.950398 2003 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:48:15.953513 kubelet[2003]: I0702 07:48:15.950773 2003 server.go:1232] "Started kubelet" Jul 2 07:48:15.953513 kubelet[2003]: I0702 07:48:15.952053 2003 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:48:15.955985 kubelet[2003]: I0702 07:48:15.954392 2003 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:48:15.955985 kubelet[2003]: I0702 07:48:15.955194 2003 server.go:462] "Adding debug handlers to kubelet server" Jul 2 07:48:15.955985 kubelet[2003]: I0702 07:48:15.955533 2003 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 07:48:15.955985 kubelet[2003]: I0702 07:48:15.955745 2003 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:48:15.955985 kubelet[2003]: I0702 07:48:15.955827 2003 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:48:15.955985 kubelet[2003]: I0702 07:48:15.955919 2003 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:48:15.956192 kubelet[2003]: I0702 07:48:15.956132 2003 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:48:15.963732 kubelet[2003]: 
E0702 07:48:15.963697 2003 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:48:15.965505 kubelet[2003]: E0702 07:48:15.965484 2003 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 07:48:15.965598 kubelet[2003]: E0702 07:48:15.965539 2003 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:48:15.987286 kubelet[2003]: I0702 07:48:15.987252 2003 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:48:15.989013 kubelet[2003]: I0702 07:48:15.988994 2003 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 07:48:15.989084 kubelet[2003]: I0702 07:48:15.989055 2003 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:48:15.989084 kubelet[2003]: I0702 07:48:15.989081 2003 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 07:48:15.989419 kubelet[2003]: E0702 07:48:15.989396 2003 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:48:16.018234 kubelet[2003]: I0702 07:48:16.018201 2003 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:48:16.018234 kubelet[2003]: I0702 07:48:16.018222 2003 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:48:16.018234 kubelet[2003]: I0702 07:48:16.018235 2003 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:48:16.018434 kubelet[2003]: I0702 07:48:16.018352 2003 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 07:48:16.018434 kubelet[2003]: I0702 07:48:16.018368 2003 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 07:48:16.018434 
kubelet[2003]: I0702 07:48:16.018373 2003 policy_none.go:49] "None policy: Start" Jul 2 07:48:16.018893 kubelet[2003]: I0702 07:48:16.018867 2003 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 07:48:16.018944 kubelet[2003]: I0702 07:48:16.018900 2003 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:48:16.019058 kubelet[2003]: I0702 07:48:16.019034 2003 state_mem.go:75] "Updated machine memory state" Jul 2 07:48:16.022894 kubelet[2003]: I0702 07:48:16.022868 2003 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:48:16.023165 kubelet[2003]: I0702 07:48:16.023145 2003 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:48:16.067419 kubelet[2003]: I0702 07:48:16.067338 2003 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:48:16.073135 kubelet[2003]: I0702 07:48:16.073062 2003 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jul 2 07:48:16.073135 kubelet[2003]: I0702 07:48:16.073139 2003 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 07:48:16.089831 kubelet[2003]: I0702 07:48:16.089802 2003 topology_manager.go:215] "Topology Admit Handler" podUID="4fcb0ae580bc7d3935ca9e3b8e607872" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 07:48:16.089934 kubelet[2003]: I0702 07:48:16.089892 2003 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 07:48:16.089934 kubelet[2003]: I0702 07:48:16.089920 2003 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 07:48:16.094498 kubelet[2003]: E0702 07:48:16.094479 2003 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already 
exists" pod="kube-system/kube-apiserver-localhost" Jul 2 07:48:16.157783 kubelet[2003]: I0702 07:48:16.157743 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4fcb0ae580bc7d3935ca9e3b8e607872-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4fcb0ae580bc7d3935ca9e3b8e607872\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:48:16.157783 kubelet[2003]: I0702 07:48:16.157791 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:48:16.157987 kubelet[2003]: I0702 07:48:16.157816 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:48:16.157987 kubelet[2003]: I0702 07:48:16.157839 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:48:16.157987 kubelet[2003]: I0702 07:48:16.157864 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:48:16.157987 kubelet[2003]: I0702 07:48:16.157887 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 07:48:16.157987 kubelet[2003]: I0702 07:48:16.157911 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4fcb0ae580bc7d3935ca9e3b8e607872-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4fcb0ae580bc7d3935ca9e3b8e607872\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:48:16.158104 kubelet[2003]: I0702 07:48:16.157934 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:48:16.158104 kubelet[2003]: I0702 07:48:16.157976 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4fcb0ae580bc7d3935ca9e3b8e607872-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4fcb0ae580bc7d3935ca9e3b8e607872\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:48:16.394629 kubelet[2003]: E0702 07:48:16.394533 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:16.394629 kubelet[2003]: E0702 07:48:16.394542 2003 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:16.394922 kubelet[2003]: E0702 07:48:16.394904 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:16.424759 sudo[2018]: pam_unix(sudo:session): session closed for user root Jul 2 07:48:16.950699 kubelet[2003]: I0702 07:48:16.950643 2003 apiserver.go:52] "Watching apiserver" Jul 2 07:48:16.956943 kubelet[2003]: I0702 07:48:16.956902 2003 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:48:16.998082 kubelet[2003]: E0702 07:48:16.998051 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:16.998189 kubelet[2003]: E0702 07:48:16.998162 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:16.998511 kubelet[2003]: E0702 07:48:16.998491 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:17.025576 kubelet[2003]: I0702 07:48:17.025506 2003 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.025430849 podCreationTimestamp="2024-07-02 07:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:48:17.016540647 +0000 UTC m=+1.125595123" watchObservedRunningTime="2024-07-02 07:48:17.025430849 +0000 UTC m=+1.134485335" Jul 2 07:48:17.034816 kubelet[2003]: 
I0702 07:48:17.034766 2003 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.034720144 podCreationTimestamp="2024-07-02 07:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:48:17.033776278 +0000 UTC m=+1.142830764" watchObservedRunningTime="2024-07-02 07:48:17.034720144 +0000 UTC m=+1.143774620" Jul 2 07:48:17.035013 kubelet[2003]: I0702 07:48:17.034887 2003 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.034860993 podCreationTimestamp="2024-07-02 07:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:48:17.026400905 +0000 UTC m=+1.135455361" watchObservedRunningTime="2024-07-02 07:48:17.034860993 +0000 UTC m=+1.143915469" Jul 2 07:48:17.617949 sudo[1301]: pam_unix(sudo:session): session closed for user root Jul 2 07:48:17.619322 sshd[1298]: pam_unix(sshd:session): session closed for user core Jul 2 07:48:17.622175 systemd[1]: sshd@4-10.0.0.87:22-10.0.0.1:56310.service: Deactivated successfully. Jul 2 07:48:17.623103 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:48:17.623303 systemd[1]: session-5.scope: Consumed 4.164s CPU time. Jul 2 07:48:17.623791 systemd-logind[1189]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:48:17.624582 systemd-logind[1189]: Removed session 5. 
Jul 2 07:48:17.999411 kubelet[2003]: E0702 07:48:17.999370 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:19.746700 kubelet[2003]: E0702 07:48:19.746662 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:23.541489 update_engine[1190]: I0702 07:48:23.541437 1190 update_attempter.cc:509] Updating boot flags... Jul 2 07:48:25.150568 kubelet[2003]: E0702 07:48:25.150509 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:25.718124 kubelet[2003]: E0702 07:48:25.718095 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:26.008748 kubelet[2003]: E0702 07:48:26.008719 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:26.008748 kubelet[2003]: E0702 07:48:26.008720 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:28.720130 kubelet[2003]: I0702 07:48:28.720096 2003 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 07:48:28.720522 env[1195]: time="2024-07-02T07:48:28.720408662Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 07:48:28.720731 kubelet[2003]: I0702 07:48:28.720593 2003 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 07:48:29.361439 kubelet[2003]: I0702 07:48:29.361383 2003 topology_manager.go:215] "Topology Admit Handler" podUID="10c739fa-16a5-45f6-a430-ba984f83a9a0" podNamespace="kube-system" podName="cilium-wq942" Jul 2 07:48:29.365027 kubelet[2003]: I0702 07:48:29.364988 2003 topology_manager.go:215] "Topology Admit Handler" podUID="42b25ba6-bf9a-4f64-8eab-a1b4d881fadf" podNamespace="kube-system" podName="kube-proxy-tsbq7" Jul 2 07:48:29.369884 systemd[1]: Created slice kubepods-burstable-pod10c739fa_16a5_45f6_a430_ba984f83a9a0.slice. Jul 2 07:48:29.374020 kubelet[2003]: W0702 07:48:29.373988 2003 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 07:48:29.374235 kubelet[2003]: E0702 07:48:29.374176 2003 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 07:48:29.375364 systemd[1]: Created slice kubepods-besteffort-pod42b25ba6_bf9a_4f64_8eab_a1b4d881fadf.slice. 
Jul 2 07:48:29.453486 kubelet[2003]: I0702 07:48:29.453450 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10c739fa-16a5-45f6-a430-ba984f83a9a0-cilium-config-path\") pod \"cilium-wq942\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " pod="kube-system/cilium-wq942" Jul 2 07:48:29.453748 kubelet[2003]: I0702 07:48:29.453715 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42b25ba6-bf9a-4f64-8eab-a1b4d881fadf-lib-modules\") pod \"kube-proxy-tsbq7\" (UID: \"42b25ba6-bf9a-4f64-8eab-a1b4d881fadf\") " pod="kube-system/kube-proxy-tsbq7" Jul 2 07:48:29.453895 kubelet[2003]: I0702 07:48:29.453875 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-cni-path\") pod \"cilium-wq942\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " pod="kube-system/cilium-wq942" Jul 2 07:48:29.454023 kubelet[2003]: I0702 07:48:29.453914 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-lib-modules\") pod \"cilium-wq942\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " pod="kube-system/cilium-wq942" Jul 2 07:48:29.454023 kubelet[2003]: I0702 07:48:29.453942 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10c739fa-16a5-45f6-a430-ba984f83a9a0-hubble-tls\") pod \"cilium-wq942\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " pod="kube-system/cilium-wq942" Jul 2 07:48:29.454023 kubelet[2003]: I0702 07:48:29.454004 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42b25ba6-bf9a-4f64-8eab-a1b4d881fadf-xtables-lock\") pod \"kube-proxy-tsbq7\" (UID: \"42b25ba6-bf9a-4f64-8eab-a1b4d881fadf\") " pod="kube-system/kube-proxy-tsbq7" Jul 2 07:48:29.454136 kubelet[2003]: I0702 07:48:29.454037 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-host-proc-sys-kernel\") pod \"cilium-wq942\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " pod="kube-system/cilium-wq942" Jul 2 07:48:29.454136 kubelet[2003]: I0702 07:48:29.454064 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ndlk\" (UniqueName: \"kubernetes.io/projected/10c739fa-16a5-45f6-a430-ba984f83a9a0-kube-api-access-8ndlk\") pod \"cilium-wq942\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " pod="kube-system/cilium-wq942" Jul 2 07:48:29.454136 kubelet[2003]: I0702 07:48:29.454089 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-cilium-cgroup\") pod \"cilium-wq942\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " pod="kube-system/cilium-wq942" Jul 2 07:48:29.454136 kubelet[2003]: I0702 07:48:29.454110 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-etc-cni-netd\") pod \"cilium-wq942\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " pod="kube-system/cilium-wq942" Jul 2 07:48:29.454277 kubelet[2003]: I0702 07:48:29.454146 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-cilium-run\") pod \"cilium-wq942\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " pod="kube-system/cilium-wq942" Jul 2 07:48:29.454277 kubelet[2003]: I0702 07:48:29.454176 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-bpf-maps\") pod \"cilium-wq942\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " pod="kube-system/cilium-wq942" Jul 2 07:48:29.454277 kubelet[2003]: I0702 07:48:29.454201 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-hostproc\") pod \"cilium-wq942\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " pod="kube-system/cilium-wq942" Jul 2 07:48:29.454277 kubelet[2003]: I0702 07:48:29.454225 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-xtables-lock\") pod \"cilium-wq942\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " pod="kube-system/cilium-wq942" Jul 2 07:48:29.454277 kubelet[2003]: I0702 07:48:29.454251 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10c739fa-16a5-45f6-a430-ba984f83a9a0-clustermesh-secrets\") pod \"cilium-wq942\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " pod="kube-system/cilium-wq942" Jul 2 07:48:29.454446 kubelet[2003]: I0702 07:48:29.454285 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42b25ba6-bf9a-4f64-8eab-a1b4d881fadf-kube-proxy\") pod \"kube-proxy-tsbq7\" (UID: \"42b25ba6-bf9a-4f64-8eab-a1b4d881fadf\") " 
pod="kube-system/kube-proxy-tsbq7" Jul 2 07:48:29.454446 kubelet[2003]: I0702 07:48:29.454316 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz6sf\" (UniqueName: \"kubernetes.io/projected/42b25ba6-bf9a-4f64-8eab-a1b4d881fadf-kube-api-access-bz6sf\") pod \"kube-proxy-tsbq7\" (UID: \"42b25ba6-bf9a-4f64-8eab-a1b4d881fadf\") " pod="kube-system/kube-proxy-tsbq7" Jul 2 07:48:29.454446 kubelet[2003]: I0702 07:48:29.454347 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-host-proc-sys-net\") pod \"cilium-wq942\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " pod="kube-system/cilium-wq942" Jul 2 07:48:29.660603 kubelet[2003]: I0702 07:48:29.660509 2003 topology_manager.go:215] "Topology Admit Handler" podUID="cf8cace4-87f3-4c58-8cf8-fa93971be467" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-ch4dp" Jul 2 07:48:29.668628 systemd[1]: Created slice kubepods-besteffort-podcf8cace4_87f3_4c58_8cf8_fa93971be467.slice. Jul 2 07:48:29.675770 kubelet[2003]: E0702 07:48:29.675728 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:29.676358 env[1195]: time="2024-07-02T07:48:29.676310203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wq942,Uid:10c739fa-16a5-45f6-a430-ba984f83a9a0,Namespace:kube-system,Attempt:0,}" Jul 2 07:48:29.710914 env[1195]: time="2024-07-02T07:48:29.710800340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:48:29.711176 env[1195]: time="2024-07-02T07:48:29.710926929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:48:29.711176 env[1195]: time="2024-07-02T07:48:29.710997824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:48:29.711176 env[1195]: time="2024-07-02T07:48:29.711158107Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875 pid=2112 runtime=io.containerd.runc.v2 Jul 2 07:48:29.725149 systemd[1]: Started cri-containerd-6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875.scope. Jul 2 07:48:29.746170 env[1195]: time="2024-07-02T07:48:29.746118585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wq942,Uid:10c739fa-16a5-45f6-a430-ba984f83a9a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875\"" Jul 2 07:48:29.747249 kubelet[2003]: E0702 07:48:29.747217 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:29.748555 env[1195]: time="2024-07-02T07:48:29.748301117Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 07:48:29.753167 kubelet[2003]: E0702 07:48:29.752787 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:29.756892 kubelet[2003]: I0702 07:48:29.756300 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf8cace4-87f3-4c58-8cf8-fa93971be467-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-ch4dp\" (UID: 
\"cf8cace4-87f3-4c58-8cf8-fa93971be467\") " pod="kube-system/cilium-operator-6bc8ccdb58-ch4dp" Jul 2 07:48:29.756892 kubelet[2003]: I0702 07:48:29.756343 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxrm9\" (UniqueName: \"kubernetes.io/projected/cf8cace4-87f3-4c58-8cf8-fa93971be467-kube-api-access-kxrm9\") pod \"cilium-operator-6bc8ccdb58-ch4dp\" (UID: \"cf8cace4-87f3-4c58-8cf8-fa93971be467\") " pod="kube-system/cilium-operator-6bc8ccdb58-ch4dp" Jul 2 07:48:29.973583 kubelet[2003]: E0702 07:48:29.973073 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:29.973914 env[1195]: time="2024-07-02T07:48:29.973848614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-ch4dp,Uid:cf8cace4-87f3-4c58-8cf8-fa93971be467,Namespace:kube-system,Attempt:0,}" Jul 2 07:48:29.990894 env[1195]: time="2024-07-02T07:48:29.990127368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:48:29.990894 env[1195]: time="2024-07-02T07:48:29.990166553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:48:29.990894 env[1195]: time="2024-07-02T07:48:29.990177323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:48:29.990894 env[1195]: time="2024-07-02T07:48:29.990352284Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb113c25038b6e210473f3a821bc2fc7d8fb4e23f64f550cc3f7c8df30557e9e pid=2154 runtime=io.containerd.runc.v2 Jul 2 07:48:30.002048 systemd[1]: Started cri-containerd-fb113c25038b6e210473f3a821bc2fc7d8fb4e23f64f550cc3f7c8df30557e9e.scope. Jul 2 07:48:30.016053 kubelet[2003]: E0702 07:48:30.015941 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:30.039947 env[1195]: time="2024-07-02T07:48:30.039902347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-ch4dp,Uid:cf8cace4-87f3-4c58-8cf8-fa93971be467,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb113c25038b6e210473f3a821bc2fc7d8fb4e23f64f550cc3f7c8df30557e9e\"" Jul 2 07:48:30.040880 kubelet[2003]: E0702 07:48:30.040852 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:30.584427 kubelet[2003]: E0702 07:48:30.584389 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:30.585143 env[1195]: time="2024-07-02T07:48:30.584737947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tsbq7,Uid:42b25ba6-bf9a-4f64-8eab-a1b4d881fadf,Namespace:kube-system,Attempt:0,}" Jul 2 07:48:30.597683 env[1195]: time="2024-07-02T07:48:30.597627282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:48:30.597683 env[1195]: time="2024-07-02T07:48:30.597660014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:48:30.597683 env[1195]: time="2024-07-02T07:48:30.597670444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:48:30.597861 env[1195]: time="2024-07-02T07:48:30.597777406Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24d8b6fc14235800c36a7ed2182f9161d95c2bcc753bbd04953c3e66161cd7a3 pid=2195 runtime=io.containerd.runc.v2 Jul 2 07:48:30.608922 systemd[1]: Started cri-containerd-24d8b6fc14235800c36a7ed2182f9161d95c2bcc753bbd04953c3e66161cd7a3.scope. Jul 2 07:48:30.628971 env[1195]: time="2024-07-02T07:48:30.628090984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tsbq7,Uid:42b25ba6-bf9a-4f64-8eab-a1b4d881fadf,Namespace:kube-system,Attempt:0,} returns sandbox id \"24d8b6fc14235800c36a7ed2182f9161d95c2bcc753bbd04953c3e66161cd7a3\"" Jul 2 07:48:30.629096 kubelet[2003]: E0702 07:48:30.628762 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:30.631495 env[1195]: time="2024-07-02T07:48:30.631459978Z" level=info msg="CreateContainer within sandbox \"24d8b6fc14235800c36a7ed2182f9161d95c2bcc753bbd04953c3e66161cd7a3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:48:30.648080 env[1195]: time="2024-07-02T07:48:30.648023744Z" level=info msg="CreateContainer within sandbox \"24d8b6fc14235800c36a7ed2182f9161d95c2bcc753bbd04953c3e66161cd7a3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"3a09cdaf2825629ca748da751ecb89db6ac79a18a802435dbb100d085f1148a4\"" Jul 2 07:48:30.648583 env[1195]: time="2024-07-02T07:48:30.648558425Z" level=info msg="StartContainer for \"3a09cdaf2825629ca748da751ecb89db6ac79a18a802435dbb100d085f1148a4\"" Jul 2 07:48:30.661693 systemd[1]: Started cri-containerd-3a09cdaf2825629ca748da751ecb89db6ac79a18a802435dbb100d085f1148a4.scope. Jul 2 07:48:30.688039 env[1195]: time="2024-07-02T07:48:30.687988811Z" level=info msg="StartContainer for \"3a09cdaf2825629ca748da751ecb89db6ac79a18a802435dbb100d085f1148a4\" returns successfully" Jul 2 07:48:31.019204 kubelet[2003]: E0702 07:48:31.019169 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:31.027819 kubelet[2003]: I0702 07:48:31.027784 2003 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tsbq7" podStartSLOduration=2.027746861 podCreationTimestamp="2024-07-02 07:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:48:31.027068077 +0000 UTC m=+15.136122533" watchObservedRunningTime="2024-07-02 07:48:31.027746861 +0000 UTC m=+15.136801317" Jul 2 07:48:36.699440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2002365941.mount: Deactivated successfully. 
Jul 2 07:48:41.227231 env[1195]: time="2024-07-02T07:48:41.227164406Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:41.230125 env[1195]: time="2024-07-02T07:48:41.230050361Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:41.231919 env[1195]: time="2024-07-02T07:48:41.231865530Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:41.232399 env[1195]: time="2024-07-02T07:48:41.232352748Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 07:48:41.234679 env[1195]: time="2024-07-02T07:48:41.234642530Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 07:48:41.237251 env[1195]: time="2024-07-02T07:48:41.237211258Z" level=info msg="CreateContainer within sandbox \"6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:48:41.253843 env[1195]: time="2024-07-02T07:48:41.253791647Z" level=info msg="CreateContainer within sandbox \"6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0\"" Jul 2 07:48:41.254474 
env[1195]: time="2024-07-02T07:48:41.254424018Z" level=info msg="StartContainer for \"bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0\"" Jul 2 07:48:41.272575 systemd[1]: Started cri-containerd-bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0.scope. Jul 2 07:48:41.293157 env[1195]: time="2024-07-02T07:48:41.293096957Z" level=info msg="StartContainer for \"bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0\" returns successfully" Jul 2 07:48:41.312805 systemd[1]: cri-containerd-bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0.scope: Deactivated successfully. Jul 2 07:48:42.249986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0-rootfs.mount: Deactivated successfully. Jul 2 07:48:42.505293 systemd[1]: Started sshd@5-10.0.0.87:22-10.0.0.1:57462.service. Jul 2 07:48:42.595660 kubelet[2003]: E0702 07:48:42.595635 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:42.636355 sshd[2435]: Accepted publickey for core from 10.0.0.1 port 57462 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:48:42.655369 sshd[2435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:48:42.658838 systemd-logind[1189]: New session 6 of user core. Jul 2 07:48:42.659604 systemd[1]: Started session-6.scope. 
Jul 2 07:48:42.686814 env[1195]: time="2024-07-02T07:48:42.686737447Z" level=info msg="shim disconnected" id=bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0
Jul 2 07:48:42.686814 env[1195]: time="2024-07-02T07:48:42.686808912Z" level=warning msg="cleaning up after shim disconnected" id=bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0 namespace=k8s.io
Jul 2 07:48:42.686814 env[1195]: time="2024-07-02T07:48:42.686818620Z" level=info msg="cleaning up dead shim"
Jul 2 07:48:42.695835 env[1195]: time="2024-07-02T07:48:42.695787919Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2438 runtime=io.containerd.runc.v2\n"
Jul 2 07:48:42.770847 sshd[2435]: pam_unix(sshd:session): session closed for user core
Jul 2 07:48:42.773344 systemd[1]: sshd@5-10.0.0.87:22-10.0.0.1:57462.service: Deactivated successfully.
Jul 2 07:48:42.774263 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 07:48:42.774899 systemd-logind[1189]: Session 6 logged out. Waiting for processes to exit.
Jul 2 07:48:42.775776 systemd-logind[1189]: Removed session 6.
Jul 2 07:48:43.558596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2882842357.mount: Deactivated successfully.
Jul 2 07:48:43.600987 kubelet[2003]: E0702 07:48:43.600700 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:43.603993 env[1195]: time="2024-07-02T07:48:43.603938128Z" level=info msg="CreateContainer within sandbox \"6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 07:48:43.617793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622954282.mount: Deactivated successfully.
Jul 2 07:48:43.621011 env[1195]: time="2024-07-02T07:48:43.620892875Z" level=info msg="CreateContainer within sandbox \"6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335\""
Jul 2 07:48:43.621681 env[1195]: time="2024-07-02T07:48:43.621640703Z" level=info msg="StartContainer for \"f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335\""
Jul 2 07:48:43.638244 systemd[1]: Started cri-containerd-f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335.scope.
Jul 2 07:48:43.668127 env[1195]: time="2024-07-02T07:48:43.668080664Z" level=info msg="StartContainer for \"f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335\" returns successfully"
Jul 2 07:48:43.676898 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 07:48:43.677113 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 07:48:43.677284 systemd[1]: Stopping systemd-sysctl.service...
Jul 2 07:48:43.678641 systemd[1]: Starting systemd-sysctl.service...
Jul 2 07:48:43.681028 systemd[1]: cri-containerd-f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335.scope: Deactivated successfully.
Jul 2 07:48:43.685584 systemd[1]: Finished systemd-sysctl.service.
Jul 2 07:48:43.707687 env[1195]: time="2024-07-02T07:48:43.707635326Z" level=info msg="shim disconnected" id=f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335
Jul 2 07:48:43.707687 env[1195]: time="2024-07-02T07:48:43.707688386Z" level=warning msg="cleaning up after shim disconnected" id=f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335 namespace=k8s.io
Jul 2 07:48:43.708078 env[1195]: time="2024-07-02T07:48:43.707696972Z" level=info msg="cleaning up dead shim"
Jul 2 07:48:43.713505 env[1195]: time="2024-07-02T07:48:43.713464459Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2511 runtime=io.containerd.runc.v2\n"
Jul 2 07:48:44.272766 env[1195]: time="2024-07-02T07:48:44.272703124Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:48:44.274741 env[1195]: time="2024-07-02T07:48:44.274667561Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:48:44.276252 env[1195]: time="2024-07-02T07:48:44.276196719Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:48:44.276641 env[1195]: time="2024-07-02T07:48:44.276604536Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 2 07:48:44.278251 env[1195]: time="2024-07-02T07:48:44.278223563Z" level=info msg="CreateContainer within sandbox \"fb113c25038b6e210473f3a821bc2fc7d8fb4e23f64f550cc3f7c8df30557e9e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 2 07:48:44.288900 env[1195]: time="2024-07-02T07:48:44.288851706Z" level=info msg="CreateContainer within sandbox \"fb113c25038b6e210473f3a821bc2fc7d8fb4e23f64f550cc3f7c8df30557e9e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc\""
Jul 2 07:48:44.289340 env[1195]: time="2024-07-02T07:48:44.289302244Z" level=info msg="StartContainer for \"0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc\""
Jul 2 07:48:44.301688 systemd[1]: Started cri-containerd-0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc.scope.
Jul 2 07:48:44.331595 env[1195]: time="2024-07-02T07:48:44.331531902Z" level=info msg="StartContainer for \"0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc\" returns successfully"
Jul 2 07:48:44.556253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335-rootfs.mount: Deactivated successfully.
Jul 2 07:48:44.602559 kubelet[2003]: E0702 07:48:44.602243 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:44.604931 env[1195]: time="2024-07-02T07:48:44.604637987Z" level=info msg="CreateContainer within sandbox \"6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 07:48:44.605388 kubelet[2003]: E0702 07:48:44.605369 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:44.620383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2580743444.mount: Deactivated successfully.
Jul 2 07:48:44.627899 env[1195]: time="2024-07-02T07:48:44.627847207Z" level=info msg="CreateContainer within sandbox \"6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227\""
Jul 2 07:48:44.628498 env[1195]: time="2024-07-02T07:48:44.628453578Z" level=info msg="StartContainer for \"a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227\""
Jul 2 07:48:44.651343 systemd[1]: Started cri-containerd-a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227.scope.
Jul 2 07:48:44.678987 env[1195]: time="2024-07-02T07:48:44.678937461Z" level=info msg="StartContainer for \"a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227\" returns successfully"
Jul 2 07:48:44.679929 systemd[1]: cri-containerd-a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227.scope: Deactivated successfully.
Jul 2 07:48:44.903852 env[1195]: time="2024-07-02T07:48:44.903717956Z" level=info msg="shim disconnected" id=a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227
Jul 2 07:48:44.903852 env[1195]: time="2024-07-02T07:48:44.903775123Z" level=warning msg="cleaning up after shim disconnected" id=a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227 namespace=k8s.io
Jul 2 07:48:44.903852 env[1195]: time="2024-07-02T07:48:44.903788278Z" level=info msg="cleaning up dead shim"
Jul 2 07:48:44.913402 env[1195]: time="2024-07-02T07:48:44.913344383Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2606 runtime=io.containerd.runc.v2\n"
Jul 2 07:48:45.555340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227-rootfs.mount: Deactivated successfully.
Jul 2 07:48:45.608591 kubelet[2003]: E0702 07:48:45.608552 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:45.608977 kubelet[2003]: E0702 07:48:45.608620 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:45.610399 env[1195]: time="2024-07-02T07:48:45.610338376Z" level=info msg="CreateContainer within sandbox \"6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 07:48:45.624577 kubelet[2003]: I0702 07:48:45.621104 2003 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-ch4dp" podStartSLOduration=2.385592287 podCreationTimestamp="2024-07-02 07:48:29 +0000 UTC" firstStartedPulling="2024-07-02 07:48:30.041323826 +0000 UTC m=+14.150378272" lastFinishedPulling="2024-07-02 07:48:44.276793933 +0000 UTC m=+28.385848389" observedRunningTime="2024-07-02 07:48:44.626944839 +0000 UTC m=+28.735999305" watchObservedRunningTime="2024-07-02 07:48:45.621062404 +0000 UTC m=+29.730116860"
Jul 2 07:48:45.626238 env[1195]: time="2024-07-02T07:48:45.626197236Z" level=info msg="CreateContainer within sandbox \"6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9\""
Jul 2 07:48:45.626743 env[1195]: time="2024-07-02T07:48:45.626715451Z" level=info msg="StartContainer for \"029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9\""
Jul 2 07:48:45.641774 systemd[1]: Started cri-containerd-029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9.scope.
Jul 2 07:48:45.661877 env[1195]: time="2024-07-02T07:48:45.661810206Z" level=info msg="StartContainer for \"029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9\" returns successfully"
Jul 2 07:48:45.662083 systemd[1]: cri-containerd-029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9.scope: Deactivated successfully.
Jul 2 07:48:45.683572 env[1195]: time="2024-07-02T07:48:45.683500018Z" level=info msg="shim disconnected" id=029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9
Jul 2 07:48:45.683572 env[1195]: time="2024-07-02T07:48:45.683570120Z" level=warning msg="cleaning up after shim disconnected" id=029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9 namespace=k8s.io
Jul 2 07:48:45.683831 env[1195]: time="2024-07-02T07:48:45.683584016Z" level=info msg="cleaning up dead shim"
Jul 2 07:48:45.689530 env[1195]: time="2024-07-02T07:48:45.689494898Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2660 runtime=io.containerd.runc.v2\n"
Jul 2 07:48:46.555382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9-rootfs.mount: Deactivated successfully.
Jul 2 07:48:46.613089 kubelet[2003]: E0702 07:48:46.612480 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:46.618001 env[1195]: time="2024-07-02T07:48:46.615012368Z" level=info msg="CreateContainer within sandbox \"6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 07:48:46.630280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2018196753.mount: Deactivated successfully.
Jul 2 07:48:46.630979 env[1195]: time="2024-07-02T07:48:46.630924612Z" level=info msg="CreateContainer within sandbox \"6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103\""
Jul 2 07:48:46.631461 env[1195]: time="2024-07-02T07:48:46.631404174Z" level=info msg="StartContainer for \"49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103\""
Jul 2 07:48:46.646453 systemd[1]: Started cri-containerd-49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103.scope.
Jul 2 07:48:46.671162 env[1195]: time="2024-07-02T07:48:46.671113677Z" level=info msg="StartContainer for \"49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103\" returns successfully"
Jul 2 07:48:46.813783 kubelet[2003]: I0702 07:48:46.813657 2003 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jul 2 07:48:46.839637 kubelet[2003]: I0702 07:48:46.839589 2003 topology_manager.go:215] "Topology Admit Handler" podUID="585756d3-dc19-47dd-99eb-12f299955ebd" podNamespace="kube-system" podName="coredns-5dd5756b68-gg4n5"
Jul 2 07:48:46.841948 kubelet[2003]: W0702 07:48:46.841906 2003 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Jul 2 07:48:46.842046 kubelet[2003]: E0702 07:48:46.841968 2003 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Jul 2 07:48:46.842963 kubelet[2003]: I0702 07:48:46.842922 2003 topology_manager.go:215] "Topology Admit Handler" podUID="db47fee3-923b-4caf-92b7-e065af512b5f" podNamespace="kube-system" podName="coredns-5dd5756b68-fgcq4"
Jul 2 07:48:46.847065 systemd[1]: Created slice kubepods-burstable-pod585756d3_dc19_47dd_99eb_12f299955ebd.slice.
Jul 2 07:48:46.869094 systemd[1]: Created slice kubepods-burstable-poddb47fee3_923b_4caf_92b7_e065af512b5f.slice.
Jul 2 07:48:47.022323 kubelet[2003]: I0702 07:48:47.022291 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qczqj\" (UniqueName: \"kubernetes.io/projected/db47fee3-923b-4caf-92b7-e065af512b5f-kube-api-access-qczqj\") pod \"coredns-5dd5756b68-fgcq4\" (UID: \"db47fee3-923b-4caf-92b7-e065af512b5f\") " pod="kube-system/coredns-5dd5756b68-fgcq4"
Jul 2 07:48:47.022451 kubelet[2003]: I0702 07:48:47.022343 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/585756d3-dc19-47dd-99eb-12f299955ebd-config-volume\") pod \"coredns-5dd5756b68-gg4n5\" (UID: \"585756d3-dc19-47dd-99eb-12f299955ebd\") " pod="kube-system/coredns-5dd5756b68-gg4n5"
Jul 2 07:48:47.022451 kubelet[2003]: I0702 07:48:47.022371 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wndsd\" (UniqueName: \"kubernetes.io/projected/585756d3-dc19-47dd-99eb-12f299955ebd-kube-api-access-wndsd\") pod \"coredns-5dd5756b68-gg4n5\" (UID: \"585756d3-dc19-47dd-99eb-12f299955ebd\") " pod="kube-system/coredns-5dd5756b68-gg4n5"
Jul 2 07:48:47.022451 kubelet[2003]: I0702 07:48:47.022395 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db47fee3-923b-4caf-92b7-e065af512b5f-config-volume\") pod \"coredns-5dd5756b68-fgcq4\" (UID: \"db47fee3-923b-4caf-92b7-e065af512b5f\") " pod="kube-system/coredns-5dd5756b68-fgcq4"
Jul 2 07:48:47.616795 kubelet[2003]: E0702 07:48:47.616760 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:47.761786 kubelet[2003]: E0702 07:48:47.761745 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:47.762400 env[1195]: time="2024-07-02T07:48:47.762355089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-gg4n5,Uid:585756d3-dc19-47dd-99eb-12f299955ebd,Namespace:kube-system,Attempt:0,}"
Jul 2 07:48:47.773073 kubelet[2003]: E0702 07:48:47.773001 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:47.773371 env[1195]: time="2024-07-02T07:48:47.773339289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fgcq4,Uid:db47fee3-923b-4caf-92b7-e065af512b5f,Namespace:kube-system,Attempt:0,}"
Jul 2 07:48:47.774545 systemd[1]: Started sshd@6-10.0.0.87:22-10.0.0.1:57468.service.
Jul 2 07:48:47.805979 sshd[2811]: Accepted publickey for core from 10.0.0.1 port 57468 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw
Jul 2 07:48:47.807197 sshd[2811]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:48:47.810443 systemd-logind[1189]: New session 7 of user core.
Jul 2 07:48:47.811415 systemd[1]: Started session-7.scope.
Jul 2 07:48:47.918396 sshd[2811]: pam_unix(sshd:session): session closed for user core
Jul 2 07:48:47.920231 systemd[1]: sshd@6-10.0.0.87:22-10.0.0.1:57468.service: Deactivated successfully.
Jul 2 07:48:47.920845 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 07:48:47.921276 systemd-logind[1189]: Session 7 logged out. Waiting for processes to exit.
Jul 2 07:48:47.921919 systemd-logind[1189]: Removed session 7.
Jul 2 07:48:48.619492 kubelet[2003]: E0702 07:48:48.619450 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:48.636607 systemd-networkd[1019]: cilium_host: Link UP
Jul 2 07:48:48.636728 systemd-networkd[1019]: cilium_net: Link UP
Jul 2 07:48:48.636730 systemd-networkd[1019]: cilium_net: Gained carrier
Jul 2 07:48:48.636847 systemd-networkd[1019]: cilium_host: Gained carrier
Jul 2 07:48:48.642101 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 2 07:48:48.640355 systemd-networkd[1019]: cilium_host: Gained IPv6LL
Jul 2 07:48:48.710529 systemd-networkd[1019]: cilium_vxlan: Link UP
Jul 2 07:48:48.710539 systemd-networkd[1019]: cilium_vxlan: Gained carrier
Jul 2 07:48:48.773081 systemd-networkd[1019]: cilium_net: Gained IPv6LL
Jul 2 07:48:48.886993 kernel: NET: Registered PF_ALG protocol family
Jul 2 07:48:49.403082 systemd-networkd[1019]: lxc_health: Link UP
Jul 2 07:48:49.411434 systemd-networkd[1019]: lxc_health: Gained carrier
Jul 2 07:48:49.411981 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 07:48:49.621311 kubelet[2003]: E0702 07:48:49.621280 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:49.692795 kubelet[2003]: I0702 07:48:49.692490 2003 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-wq942" podStartSLOduration=9.206847824 podCreationTimestamp="2024-07-02 07:48:29 +0000 UTC" firstStartedPulling="2024-07-02 07:48:29.747822801 +0000 UTC m=+13.856877257" lastFinishedPulling="2024-07-02 07:48:41.233427492 +0000 UTC m=+25.342481958" observedRunningTime="2024-07-02 07:48:47.701509748 +0000 UTC m=+31.810564234" watchObservedRunningTime="2024-07-02 07:48:49.692452525 +0000 UTC m=+33.801506981"
Jul 2 07:48:49.840867 systemd-networkd[1019]: lxc3741b46c3ed4: Link UP
Jul 2 07:48:49.855991 kernel: eth0: renamed from tmp0d10a
Jul 2 07:48:49.863007 kernel: eth0: renamed from tmpcc1ca
Jul 2 07:48:49.869992 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 2 07:48:49.870109 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc349abf57223a: link becomes ready
Jul 2 07:48:49.869525 systemd-networkd[1019]: lxc349abf57223a: Link UP
Jul 2 07:48:49.870064 systemd-networkd[1019]: lxc349abf57223a: Gained carrier
Jul 2 07:48:49.873408 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3741b46c3ed4: link becomes ready
Jul 2 07:48:49.872373 systemd-networkd[1019]: lxc3741b46c3ed4: Gained carrier
Jul 2 07:48:50.181064 systemd-networkd[1019]: cilium_vxlan: Gained IPv6LL
Jul 2 07:48:50.623622 kubelet[2003]: E0702 07:48:50.623590 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:51.269115 systemd-networkd[1019]: lxc3741b46c3ed4: Gained IPv6LL
Jul 2 07:48:51.269423 systemd-networkd[1019]: lxc349abf57223a: Gained IPv6LL
Jul 2 07:48:51.461187 systemd-networkd[1019]: lxc_health: Gained IPv6LL
Jul 2 07:48:51.625017 kubelet[2003]: E0702 07:48:51.624898 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:52.627176 kubelet[2003]: E0702 07:48:52.627141 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:52.924077 systemd[1]: Started sshd@7-10.0.0.87:22-10.0.0.1:48750.service.
Jul 2 07:48:53.012982 sshd[3234]: Accepted publickey for core from 10.0.0.1 port 48750 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw
Jul 2 07:48:53.013050 sshd[3234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:48:53.017537 systemd[1]: Started session-8.scope.
Jul 2 07:48:53.018882 systemd-logind[1189]: New session 8 of user core.
Jul 2 07:48:53.135201 sshd[3234]: pam_unix(sshd:session): session closed for user core
Jul 2 07:48:53.138008 systemd-logind[1189]: Session 8 logged out. Waiting for processes to exit.
Jul 2 07:48:53.138575 systemd[1]: sshd@7-10.0.0.87:22-10.0.0.1:48750.service: Deactivated successfully.
Jul 2 07:48:53.139244 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 07:48:53.140032 systemd-logind[1189]: Removed session 8.
Jul 2 07:48:53.195522 env[1195]: time="2024-07-02T07:48:53.195377646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:48:53.195522 env[1195]: time="2024-07-02T07:48:53.195420967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:48:53.195949 env[1195]: time="2024-07-02T07:48:53.195431827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:48:53.196165 env[1195]: time="2024-07-02T07:48:53.196115022Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc1ca078e8e1302306a1ac4ff253c6f952338b8017bc8bcc69db6cf212844d5a pid=3260 runtime=io.containerd.runc.v2
Jul 2 07:48:53.217248 systemd[1]: Started cri-containerd-cc1ca078e8e1302306a1ac4ff253c6f952338b8017bc8bcc69db6cf212844d5a.scope.
Jul 2 07:48:53.226888 systemd-resolved[1137]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 07:48:53.247723 env[1195]: time="2024-07-02T07:48:53.247674447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-gg4n5,Uid:585756d3-dc19-47dd-99eb-12f299955ebd,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc1ca078e8e1302306a1ac4ff253c6f952338b8017bc8bcc69db6cf212844d5a\""
Jul 2 07:48:53.248304 kubelet[2003]: E0702 07:48:53.248285 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:53.250091 env[1195]: time="2024-07-02T07:48:53.250050391Z" level=info msg="CreateContainer within sandbox \"cc1ca078e8e1302306a1ac4ff253c6f952338b8017bc8bcc69db6cf212844d5a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 07:48:53.290153 env[1195]: time="2024-07-02T07:48:53.290069662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:48:53.290153 env[1195]: time="2024-07-02T07:48:53.290110558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:48:53.290153 env[1195]: time="2024-07-02T07:48:53.290121348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:48:53.290425 env[1195]: time="2024-07-02T07:48:53.290250060Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d10a05336291314c8cba2299b8b18d7be9f84e0e50446a24245d403fd97f32b pid=3298 runtime=io.containerd.runc.v2
Jul 2 07:48:53.307567 systemd[1]: run-containerd-runc-k8s.io-0d10a05336291314c8cba2299b8b18d7be9f84e0e50446a24245d403fd97f32b-runc.occw4m.mount: Deactivated successfully.
Jul 2 07:48:53.308998 systemd[1]: Started cri-containerd-0d10a05336291314c8cba2299b8b18d7be9f84e0e50446a24245d403fd97f32b.scope.
Jul 2 07:48:53.318884 systemd-resolved[1137]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 07:48:53.338120 env[1195]: time="2024-07-02T07:48:53.338054359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fgcq4,Uid:db47fee3-923b-4caf-92b7-e065af512b5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d10a05336291314c8cba2299b8b18d7be9f84e0e50446a24245d403fd97f32b\""
Jul 2 07:48:53.338976 kubelet[2003]: E0702 07:48:53.338932 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:53.340718 env[1195]: time="2024-07-02T07:48:53.340688889Z" level=info msg="CreateContainer within sandbox \"0d10a05336291314c8cba2299b8b18d7be9f84e0e50446a24245d403fd97f32b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 07:48:53.498870 env[1195]: time="2024-07-02T07:48:53.498816867Z" level=info msg="CreateContainer within sandbox \"0d10a05336291314c8cba2299b8b18d7be9f84e0e50446a24245d403fd97f32b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aef270bf5c0c4b8f58a2285a8d224ea4faefbe21811d6c8fca5db750b045b13c\""
Jul 2 07:48:53.499535 env[1195]: time="2024-07-02T07:48:53.499457130Z" level=info msg="StartContainer for \"aef270bf5c0c4b8f58a2285a8d224ea4faefbe21811d6c8fca5db750b045b13c\""
Jul 2 07:48:53.499729 env[1195]: time="2024-07-02T07:48:53.499693224Z" level=info msg="CreateContainer within sandbox \"cc1ca078e8e1302306a1ac4ff253c6f952338b8017bc8bcc69db6cf212844d5a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"edb73d4d4987a89fcace7b309fa259af864d9e1c03eb2fea20ed1a568e2dbfd0\""
Jul 2 07:48:53.500245 env[1195]: time="2024-07-02T07:48:53.500187583Z" level=info msg="StartContainer for \"edb73d4d4987a89fcace7b309fa259af864d9e1c03eb2fea20ed1a568e2dbfd0\""
Jul 2 07:48:53.513501 systemd[1]: Started cri-containerd-aef270bf5c0c4b8f58a2285a8d224ea4faefbe21811d6c8fca5db750b045b13c.scope.
Jul 2 07:48:53.521599 systemd[1]: Started cri-containerd-edb73d4d4987a89fcace7b309fa259af864d9e1c03eb2fea20ed1a568e2dbfd0.scope.
Jul 2 07:48:53.538207 env[1195]: time="2024-07-02T07:48:53.538138747Z" level=info msg="StartContainer for \"aef270bf5c0c4b8f58a2285a8d224ea4faefbe21811d6c8fca5db750b045b13c\" returns successfully"
Jul 2 07:48:53.552170 env[1195]: time="2024-07-02T07:48:53.552118626Z" level=info msg="StartContainer for \"edb73d4d4987a89fcace7b309fa259af864d9e1c03eb2fea20ed1a568e2dbfd0\" returns successfully"
Jul 2 07:48:53.630183 kubelet[2003]: E0702 07:48:53.630149 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:53.631226 kubelet[2003]: E0702 07:48:53.631214 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:53.642947 kubelet[2003]: I0702 07:48:53.642921 2003 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fgcq4" podStartSLOduration=24.642889603 podCreationTimestamp="2024-07-02 07:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:48:53.64221773 +0000 UTC m=+37.751272186" watchObservedRunningTime="2024-07-02 07:48:53.642889603 +0000 UTC m=+37.751944059"
Jul 2 07:48:53.679155 kubelet[2003]: I0702 07:48:53.679095 2003 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gg4n5" podStartSLOduration=24.679047667 podCreationTimestamp="2024-07-02 07:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:48:53.678828505 +0000 UTC m=+37.787882961" watchObservedRunningTime="2024-07-02 07:48:53.679047667 +0000 UTC m=+37.788102123"
Jul 2 07:48:54.633357 kubelet[2003]: E0702 07:48:54.633333 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:54.633745 kubelet[2003]: E0702 07:48:54.633562 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:55.635569 kubelet[2003]: E0702 07:48:55.635531 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:55.635950 kubelet[2003]: E0702 07:48:55.635617 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:48:58.138079 systemd[1]: Started sshd@8-10.0.0.87:22-10.0.0.1:48766.service.
Jul 2 07:48:58.171473 sshd[3423]: Accepted publickey for core from 10.0.0.1 port 48766 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:48:58.173226 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:48:58.176792 systemd-logind[1189]: New session 9 of user core. Jul 2 07:48:58.177920 systemd[1]: Started session-9.scope. Jul 2 07:48:58.291885 sshd[3423]: pam_unix(sshd:session): session closed for user core Jul 2 07:48:58.294770 systemd[1]: sshd@8-10.0.0.87:22-10.0.0.1:48766.service: Deactivated successfully. Jul 2 07:48:58.295335 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 07:48:58.295937 systemd-logind[1189]: Session 9 logged out. Waiting for processes to exit. Jul 2 07:48:58.297055 systemd[1]: Started sshd@9-10.0.0.87:22-10.0.0.1:48772.service. Jul 2 07:48:58.297928 systemd-logind[1189]: Removed session 9. Jul 2 07:48:58.328869 sshd[3438]: Accepted publickey for core from 10.0.0.1 port 48772 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:48:58.329835 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:48:58.333226 systemd-logind[1189]: New session 10 of user core. Jul 2 07:48:58.334240 systemd[1]: Started session-10.scope. Jul 2 07:48:58.986010 sshd[3438]: pam_unix(sshd:session): session closed for user core Jul 2 07:48:58.989740 systemd[1]: Started sshd@10-10.0.0.87:22-10.0.0.1:48780.service. Jul 2 07:48:58.993097 systemd[1]: sshd@9-10.0.0.87:22-10.0.0.1:48772.service: Deactivated successfully. Jul 2 07:48:58.994102 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 07:48:58.995161 systemd-logind[1189]: Session 10 logged out. Waiting for processes to exit. Jul 2 07:48:58.996579 systemd-logind[1189]: Removed session 10. 
Jul 2 07:48:59.029077 sshd[3448]: Accepted publickey for core from 10.0.0.1 port 48780 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:48:59.030474 sshd[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:48:59.034387 systemd-logind[1189]: New session 11 of user core. Jul 2 07:48:59.035138 systemd[1]: Started session-11.scope. Jul 2 07:48:59.145126 sshd[3448]: pam_unix(sshd:session): session closed for user core Jul 2 07:48:59.147622 systemd[1]: sshd@10-10.0.0.87:22-10.0.0.1:48780.service: Deactivated successfully. Jul 2 07:48:59.148307 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 07:48:59.148790 systemd-logind[1189]: Session 11 logged out. Waiting for processes to exit. Jul 2 07:48:59.149474 systemd-logind[1189]: Removed session 11. Jul 2 07:49:04.149140 systemd[1]: Started sshd@11-10.0.0.87:22-10.0.0.1:34238.service. Jul 2 07:49:04.180462 sshd[3467]: Accepted publickey for core from 10.0.0.1 port 34238 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:04.181612 sshd[3467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:04.185208 systemd-logind[1189]: New session 12 of user core. Jul 2 07:49:04.186175 systemd[1]: Started session-12.scope. Jul 2 07:49:04.324577 sshd[3467]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:04.326696 systemd[1]: sshd@11-10.0.0.87:22-10.0.0.1:34238.service: Deactivated successfully. Jul 2 07:49:04.327400 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 07:49:04.328028 systemd-logind[1189]: Session 12 logged out. Waiting for processes to exit. Jul 2 07:49:04.328853 systemd-logind[1189]: Removed session 12. Jul 2 07:49:09.329035 systemd[1]: Started sshd@12-10.0.0.87:22-10.0.0.1:34248.service. 
Jul 2 07:49:09.359469 sshd[3481]: Accepted publickey for core from 10.0.0.1 port 34248 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:09.360443 sshd[3481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:09.363481 systemd-logind[1189]: New session 13 of user core. Jul 2 07:49:09.364227 systemd[1]: Started session-13.scope. Jul 2 07:49:09.458630 sshd[3481]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:09.460661 systemd[1]: sshd@12-10.0.0.87:22-10.0.0.1:34248.service: Deactivated successfully. Jul 2 07:49:09.461362 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 07:49:09.462142 systemd-logind[1189]: Session 13 logged out. Waiting for processes to exit. Jul 2 07:49:09.462808 systemd-logind[1189]: Removed session 13. Jul 2 07:49:14.462770 systemd[1]: Started sshd@13-10.0.0.87:22-10.0.0.1:58366.service. Jul 2 07:49:14.493473 sshd[3494]: Accepted publickey for core from 10.0.0.1 port 58366 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:14.494539 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:14.498196 systemd-logind[1189]: New session 14 of user core. Jul 2 07:49:14.499250 systemd[1]: Started session-14.scope. Jul 2 07:49:14.600090 sshd[3494]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:14.602989 systemd[1]: sshd@13-10.0.0.87:22-10.0.0.1:58366.service: Deactivated successfully. Jul 2 07:49:14.603620 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 07:49:14.604300 systemd-logind[1189]: Session 14 logged out. Waiting for processes to exit. Jul 2 07:49:14.605559 systemd[1]: Started sshd@14-10.0.0.87:22-10.0.0.1:58376.service. Jul 2 07:49:14.606397 systemd-logind[1189]: Removed session 14. 
Jul 2 07:49:14.636502 sshd[3508]: Accepted publickey for core from 10.0.0.1 port 58376 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:14.637543 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:14.640730 systemd-logind[1189]: New session 15 of user core. Jul 2 07:49:14.641714 systemd[1]: Started session-15.scope. Jul 2 07:49:14.814726 sshd[3508]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:14.817459 systemd[1]: sshd@14-10.0.0.87:22-10.0.0.1:58376.service: Deactivated successfully. Jul 2 07:49:14.817931 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 07:49:14.818511 systemd-logind[1189]: Session 15 logged out. Waiting for processes to exit. Jul 2 07:49:14.819395 systemd[1]: Started sshd@15-10.0.0.87:22-10.0.0.1:58392.service. Jul 2 07:49:14.820033 systemd-logind[1189]: Removed session 15. Jul 2 07:49:14.855403 sshd[3519]: Accepted publickey for core from 10.0.0.1 port 58392 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:14.856554 sshd[3519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:14.859598 systemd-logind[1189]: New session 16 of user core. Jul 2 07:49:14.860357 systemd[1]: Started session-16.scope. Jul 2 07:49:15.581656 sshd[3519]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:15.584177 systemd[1]: sshd@15-10.0.0.87:22-10.0.0.1:58392.service: Deactivated successfully. Jul 2 07:49:15.584654 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 07:49:15.585856 systemd[1]: Started sshd@16-10.0.0.87:22-10.0.0.1:58408.service. Jul 2 07:49:15.587674 systemd-logind[1189]: Session 16 logged out. Waiting for processes to exit. Jul 2 07:49:15.588586 systemd-logind[1189]: Removed session 16. 
Jul 2 07:49:15.616926 sshd[3539]: Accepted publickey for core from 10.0.0.1 port 58408 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:15.618157 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:15.621140 systemd-logind[1189]: New session 17 of user core. Jul 2 07:49:15.621820 systemd[1]: Started session-17.scope. Jul 2 07:49:15.853583 sshd[3539]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:15.856459 systemd[1]: Started sshd@17-10.0.0.87:22-10.0.0.1:58424.service. Jul 2 07:49:15.858018 systemd[1]: sshd@16-10.0.0.87:22-10.0.0.1:58408.service: Deactivated successfully. Jul 2 07:49:15.858520 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 07:49:15.859131 systemd-logind[1189]: Session 17 logged out. Waiting for processes to exit. Jul 2 07:49:15.859715 systemd-logind[1189]: Removed session 17. Jul 2 07:49:15.888421 sshd[3549]: Accepted publickey for core from 10.0.0.1 port 58424 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:15.889286 sshd[3549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:15.892195 systemd-logind[1189]: New session 18 of user core. Jul 2 07:49:15.892941 systemd[1]: Started session-18.scope. Jul 2 07:49:15.994377 sshd[3549]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:15.996488 systemd[1]: sshd@17-10.0.0.87:22-10.0.0.1:58424.service: Deactivated successfully. Jul 2 07:49:15.997165 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 07:49:15.997685 systemd-logind[1189]: Session 18 logged out. Waiting for processes to exit. Jul 2 07:49:15.998540 systemd-logind[1189]: Removed session 18. Jul 2 07:49:20.999081 systemd[1]: Started sshd@18-10.0.0.87:22-10.0.0.1:58430.service. 
Jul 2 07:49:21.029850 sshd[3566]: Accepted publickey for core from 10.0.0.1 port 58430 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:21.030908 sshd[3566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:21.034013 systemd-logind[1189]: New session 19 of user core. Jul 2 07:49:21.034829 systemd[1]: Started session-19.scope. Jul 2 07:49:21.132130 sshd[3566]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:21.134310 systemd[1]: sshd@18-10.0.0.87:22-10.0.0.1:58430.service: Deactivated successfully. Jul 2 07:49:21.135114 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 07:49:21.135776 systemd-logind[1189]: Session 19 logged out. Waiting for processes to exit. Jul 2 07:49:21.136574 systemd-logind[1189]: Removed session 19. Jul 2 07:49:26.136823 systemd[1]: Started sshd@19-10.0.0.87:22-10.0.0.1:52796.service. Jul 2 07:49:26.168134 sshd[3582]: Accepted publickey for core from 10.0.0.1 port 52796 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:26.169316 sshd[3582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:26.172680 systemd-logind[1189]: New session 20 of user core. Jul 2 07:49:26.173647 systemd[1]: Started session-20.scope. Jul 2 07:49:26.270467 sshd[3582]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:26.273071 systemd[1]: sshd@19-10.0.0.87:22-10.0.0.1:52796.service: Deactivated successfully. Jul 2 07:49:26.273813 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 07:49:26.274392 systemd-logind[1189]: Session 20 logged out. Waiting for processes to exit. Jul 2 07:49:26.275014 systemd-logind[1189]: Removed session 20. Jul 2 07:49:31.274713 systemd[1]: Started sshd@20-10.0.0.87:22-10.0.0.1:52800.service. 
Jul 2 07:49:31.305801 sshd[3597]: Accepted publickey for core from 10.0.0.1 port 52800 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:31.306673 sshd[3597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:31.310001 systemd-logind[1189]: New session 21 of user core. Jul 2 07:49:31.310713 systemd[1]: Started session-21.scope. Jul 2 07:49:31.412388 sshd[3597]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:31.414874 systemd[1]: sshd@20-10.0.0.87:22-10.0.0.1:52800.service: Deactivated successfully. Jul 2 07:49:31.415539 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 07:49:31.416027 systemd-logind[1189]: Session 21 logged out. Waiting for processes to exit. Jul 2 07:49:31.416649 systemd-logind[1189]: Removed session 21. Jul 2 07:49:36.416670 systemd[1]: Started sshd@21-10.0.0.87:22-10.0.0.1:52356.service. Jul 2 07:49:36.447068 sshd[3611]: Accepted publickey for core from 10.0.0.1 port 52356 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:36.447877 sshd[3611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:36.450862 systemd-logind[1189]: New session 22 of user core. Jul 2 07:49:36.451565 systemd[1]: Started session-22.scope. Jul 2 07:49:36.546719 sshd[3611]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:36.549329 systemd[1]: sshd@21-10.0.0.87:22-10.0.0.1:52356.service: Deactivated successfully. Jul 2 07:49:36.549898 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 07:49:36.550589 systemd-logind[1189]: Session 22 logged out. Waiting for processes to exit. Jul 2 07:49:36.551523 systemd[1]: Started sshd@22-10.0.0.87:22-10.0.0.1:52364.service. Jul 2 07:49:36.552290 systemd-logind[1189]: Removed session 22. 
Jul 2 07:49:36.581703 sshd[3624]: Accepted publickey for core from 10.0.0.1 port 52364 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:36.582665 sshd[3624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:36.585642 systemd-logind[1189]: New session 23 of user core. Jul 2 07:49:36.586630 systemd[1]: Started session-23.scope. Jul 2 07:49:38.481985 env[1195]: time="2024-07-02T07:49:38.481781812Z" level=info msg="StopContainer for \"0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc\" with timeout 30 (s)" Jul 2 07:49:38.482514 env[1195]: time="2024-07-02T07:49:38.482079209Z" level=info msg="Stop container \"0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc\" with signal terminated" Jul 2 07:49:38.491691 systemd[1]: cri-containerd-0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc.scope: Deactivated successfully. Jul 2 07:49:38.498186 env[1195]: time="2024-07-02T07:49:38.498140295Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:49:38.503020 env[1195]: time="2024-07-02T07:49:38.502949025Z" level=info msg="StopContainer for \"49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103\" with timeout 2 (s)" Jul 2 07:49:38.503200 env[1195]: time="2024-07-02T07:49:38.503179946Z" level=info msg="Stop container \"49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103\" with signal terminated" Jul 2 07:49:38.508344 systemd-networkd[1019]: lxc_health: Link DOWN Jul 2 07:49:38.508353 systemd-networkd[1019]: lxc_health: Lost carrier Jul 2 07:49:38.511594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc-rootfs.mount: Deactivated successfully. 
Jul 2 07:49:38.522128 env[1195]: time="2024-07-02T07:49:38.522090522Z" level=info msg="shim disconnected" id=0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc Jul 2 07:49:38.522285 env[1195]: time="2024-07-02T07:49:38.522247553Z" level=warning msg="cleaning up after shim disconnected" id=0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc namespace=k8s.io Jul 2 07:49:38.522285 env[1195]: time="2024-07-02T07:49:38.522267961Z" level=info msg="cleaning up dead shim" Jul 2 07:49:38.527880 env[1195]: time="2024-07-02T07:49:38.527840391Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3677 runtime=io.containerd.runc.v2\n" Jul 2 07:49:38.530609 env[1195]: time="2024-07-02T07:49:38.530580092Z" level=info msg="StopContainer for \"0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc\" returns successfully" Jul 2 07:49:38.531163 env[1195]: time="2024-07-02T07:49:38.531143097Z" level=info msg="StopPodSandbox for \"fb113c25038b6e210473f3a821bc2fc7d8fb4e23f64f550cc3f7c8df30557e9e\"" Jul 2 07:49:38.531285 env[1195]: time="2024-07-02T07:49:38.531259609Z" level=info msg="Container to stop \"0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:49:38.532949 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb113c25038b6e210473f3a821bc2fc7d8fb4e23f64f550cc3f7c8df30557e9e-shm.mount: Deactivated successfully. Jul 2 07:49:38.536484 systemd[1]: cri-containerd-49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103.scope: Deactivated successfully. Jul 2 07:49:38.536703 systemd[1]: cri-containerd-49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103.scope: Consumed 6.059s CPU time. Jul 2 07:49:38.539576 systemd[1]: cri-containerd-fb113c25038b6e210473f3a821bc2fc7d8fb4e23f64f550cc3f7c8df30557e9e.scope: Deactivated successfully. 
Jul 2 07:49:38.551356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103-rootfs.mount: Deactivated successfully. Jul 2 07:49:38.555862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb113c25038b6e210473f3a821bc2fc7d8fb4e23f64f550cc3f7c8df30557e9e-rootfs.mount: Deactivated successfully. Jul 2 07:49:38.557798 env[1195]: time="2024-07-02T07:49:38.557734968Z" level=info msg="shim disconnected" id=49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103 Jul 2 07:49:38.557798 env[1195]: time="2024-07-02T07:49:38.557786707Z" level=warning msg="cleaning up after shim disconnected" id=49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103 namespace=k8s.io Jul 2 07:49:38.557798 env[1195]: time="2024-07-02T07:49:38.557794963Z" level=info msg="cleaning up dead shim" Jul 2 07:49:38.557916 env[1195]: time="2024-07-02T07:49:38.557803849Z" level=info msg="shim disconnected" id=fb113c25038b6e210473f3a821bc2fc7d8fb4e23f64f550cc3f7c8df30557e9e Jul 2 07:49:38.557916 env[1195]: time="2024-07-02T07:49:38.557832484Z" level=warning msg="cleaning up after shim disconnected" id=fb113c25038b6e210473f3a821bc2fc7d8fb4e23f64f550cc3f7c8df30557e9e namespace=k8s.io Jul 2 07:49:38.557916 env[1195]: time="2024-07-02T07:49:38.557842774Z" level=info msg="cleaning up dead shim" Jul 2 07:49:38.563774 env[1195]: time="2024-07-02T07:49:38.563731187Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3723 runtime=io.containerd.runc.v2\n" Jul 2 07:49:38.564071 env[1195]: time="2024-07-02T07:49:38.564042390Z" level=info msg="TearDown network for sandbox \"fb113c25038b6e210473f3a821bc2fc7d8fb4e23f64f550cc3f7c8df30557e9e\" successfully" Jul 2 07:49:38.564120 env[1195]: time="2024-07-02T07:49:38.564070855Z" level=info msg="StopPodSandbox for \"fb113c25038b6e210473f3a821bc2fc7d8fb4e23f64f550cc3f7c8df30557e9e\" returns 
successfully" Jul 2 07:49:38.565254 env[1195]: time="2024-07-02T07:49:38.565233024Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3722 runtime=io.containerd.runc.v2\n" Jul 2 07:49:38.567646 env[1195]: time="2024-07-02T07:49:38.567622596Z" level=info msg="StopContainer for \"49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103\" returns successfully" Jul 2 07:49:38.567998 env[1195]: time="2024-07-02T07:49:38.567975009Z" level=info msg="StopPodSandbox for \"6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875\"" Jul 2 07:49:38.568056 env[1195]: time="2024-07-02T07:49:38.568015566Z" level=info msg="Container to stop \"bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:49:38.568056 env[1195]: time="2024-07-02T07:49:38.568027690Z" level=info msg="Container to stop \"f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:49:38.568056 env[1195]: time="2024-07-02T07:49:38.568048600Z" level=info msg="Container to stop \"a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:49:38.568178 env[1195]: time="2024-07-02T07:49:38.568056986Z" level=info msg="Container to stop \"029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:49:38.568178 env[1195]: time="2024-07-02T07:49:38.568065141Z" level=info msg="Container to stop \"49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:49:38.572757 systemd[1]: cri-containerd-6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875.scope: Deactivated 
successfully. Jul 2 07:49:38.573548 kubelet[2003]: I0702 07:49:38.573518 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxrm9\" (UniqueName: \"kubernetes.io/projected/cf8cace4-87f3-4c58-8cf8-fa93971be467-kube-api-access-kxrm9\") pod \"cf8cace4-87f3-4c58-8cf8-fa93971be467\" (UID: \"cf8cace4-87f3-4c58-8cf8-fa93971be467\") " Jul 2 07:49:38.573838 kubelet[2003]: I0702 07:49:38.573556 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf8cace4-87f3-4c58-8cf8-fa93971be467-cilium-config-path\") pod \"cf8cace4-87f3-4c58-8cf8-fa93971be467\" (UID: \"cf8cace4-87f3-4c58-8cf8-fa93971be467\") " Jul 2 07:49:38.576696 kubelet[2003]: I0702 07:49:38.576659 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf8cace4-87f3-4c58-8cf8-fa93971be467-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cf8cace4-87f3-4c58-8cf8-fa93971be467" (UID: "cf8cace4-87f3-4c58-8cf8-fa93971be467"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:49:38.576833 kubelet[2003]: I0702 07:49:38.576812 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf8cace4-87f3-4c58-8cf8-fa93971be467-kube-api-access-kxrm9" (OuterVolumeSpecName: "kube-api-access-kxrm9") pod "cf8cace4-87f3-4c58-8cf8-fa93971be467" (UID: "cf8cace4-87f3-4c58-8cf8-fa93971be467"). InnerVolumeSpecName "kube-api-access-kxrm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:49:38.595866 env[1195]: time="2024-07-02T07:49:38.595826015Z" level=info msg="shim disconnected" id=6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875 Jul 2 07:49:38.595866 env[1195]: time="2024-07-02T07:49:38.595865520Z" level=warning msg="cleaning up after shim disconnected" id=6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875 namespace=k8s.io Jul 2 07:49:38.596136 env[1195]: time="2024-07-02T07:49:38.595872974Z" level=info msg="cleaning up dead shim" Jul 2 07:49:38.601507 env[1195]: time="2024-07-02T07:49:38.601471304Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3766 runtime=io.containerd.runc.v2\n" Jul 2 07:49:38.601749 env[1195]: time="2024-07-02T07:49:38.601730188Z" level=info msg="TearDown network for sandbox \"6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875\" successfully" Jul 2 07:49:38.601795 env[1195]: time="2024-07-02T07:49:38.601749705Z" level=info msg="StopPodSandbox for \"6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875\" returns successfully" Jul 2 07:49:38.674375 kubelet[2003]: I0702 07:49:38.674327 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-host-proc-sys-kernel\") pod \"10c739fa-16a5-45f6-a430-ba984f83a9a0\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " Jul 2 07:49:38.674375 kubelet[2003]: I0702 07:49:38.674371 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-hostproc\") pod \"10c739fa-16a5-45f6-a430-ba984f83a9a0\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " Jul 2 07:49:38.674375 kubelet[2003]: I0702 07:49:38.674392 2003 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-host-proc-sys-net\") pod \"10c739fa-16a5-45f6-a430-ba984f83a9a0\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " Jul 2 07:49:38.674647 kubelet[2003]: I0702 07:49:38.674407 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-xtables-lock\") pod \"10c739fa-16a5-45f6-a430-ba984f83a9a0\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " Jul 2 07:49:38.674647 kubelet[2003]: I0702 07:49:38.674436 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-lib-modules\") pod \"10c739fa-16a5-45f6-a430-ba984f83a9a0\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " Jul 2 07:49:38.674647 kubelet[2003]: I0702 07:49:38.674450 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-cilium-cgroup\") pod \"10c739fa-16a5-45f6-a430-ba984f83a9a0\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " Jul 2 07:49:38.674647 kubelet[2003]: I0702 07:49:38.674474 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10c739fa-16a5-45f6-a430-ba984f83a9a0-clustermesh-secrets\") pod \"10c739fa-16a5-45f6-a430-ba984f83a9a0\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " Jul 2 07:49:38.674647 kubelet[2003]: I0702 07:49:38.674491 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10c739fa-16a5-45f6-a430-ba984f83a9a0-hubble-tls\") pod \"10c739fa-16a5-45f6-a430-ba984f83a9a0\" (UID: 
\"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " Jul 2 07:49:38.674647 kubelet[2003]: I0702 07:49:38.674490 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-hostproc" (OuterVolumeSpecName: "hostproc") pod "10c739fa-16a5-45f6-a430-ba984f83a9a0" (UID: "10c739fa-16a5-45f6-a430-ba984f83a9a0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.674786 kubelet[2003]: I0702 07:49:38.674537 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "10c739fa-16a5-45f6-a430-ba984f83a9a0" (UID: "10c739fa-16a5-45f6-a430-ba984f83a9a0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.674786 kubelet[2003]: I0702 07:49:38.674507 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-etc-cni-netd\") pod \"10c739fa-16a5-45f6-a430-ba984f83a9a0\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " Jul 2 07:49:38.674786 kubelet[2003]: I0702 07:49:38.674563 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "10c739fa-16a5-45f6-a430-ba984f83a9a0" (UID: "10c739fa-16a5-45f6-a430-ba984f83a9a0"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.674786 kubelet[2003]: I0702 07:49:38.674546 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "10c739fa-16a5-45f6-a430-ba984f83a9a0" (UID: "10c739fa-16a5-45f6-a430-ba984f83a9a0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.674786 kubelet[2003]: I0702 07:49:38.674577 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "10c739fa-16a5-45f6-a430-ba984f83a9a0" (UID: "10c739fa-16a5-45f6-a430-ba984f83a9a0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.674924 kubelet[2003]: I0702 07:49:38.674587 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-cilium-run\") pod \"10c739fa-16a5-45f6-a430-ba984f83a9a0\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " Jul 2 07:49:38.674924 kubelet[2003]: I0702 07:49:38.674608 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-bpf-maps\") pod \"10c739fa-16a5-45f6-a430-ba984f83a9a0\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " Jul 2 07:49:38.674924 kubelet[2003]: I0702 07:49:38.674614 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "10c739fa-16a5-45f6-a430-ba984f83a9a0" (UID: "10c739fa-16a5-45f6-a430-ba984f83a9a0"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.674924 kubelet[2003]: I0702 07:49:38.674634 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ndlk\" (UniqueName: \"kubernetes.io/projected/10c739fa-16a5-45f6-a430-ba984f83a9a0-kube-api-access-8ndlk\") pod \"10c739fa-16a5-45f6-a430-ba984f83a9a0\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " Jul 2 07:49:38.674924 kubelet[2003]: I0702 07:49:38.674633 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "10c739fa-16a5-45f6-a430-ba984f83a9a0" (UID: "10c739fa-16a5-45f6-a430-ba984f83a9a0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.675059 kubelet[2003]: I0702 07:49:38.674651 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "10c739fa-16a5-45f6-a430-ba984f83a9a0" (UID: "10c739fa-16a5-45f6-a430-ba984f83a9a0"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.675059 kubelet[2003]: I0702 07:49:38.674659 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10c739fa-16a5-45f6-a430-ba984f83a9a0-cilium-config-path\") pod \"10c739fa-16a5-45f6-a430-ba984f83a9a0\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " Jul 2 07:49:38.675059 kubelet[2003]: I0702 07:49:38.674675 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-cni-path\") pod \"10c739fa-16a5-45f6-a430-ba984f83a9a0\" (UID: \"10c739fa-16a5-45f6-a430-ba984f83a9a0\") " Jul 2 07:49:38.675059 kubelet[2003]: I0702 07:49:38.674703 2003 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.675059 kubelet[2003]: I0702 07:49:38.674714 2003 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.675059 kubelet[2003]: I0702 07:49:38.674723 2003 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.675059 kubelet[2003]: I0702 07:49:38.674732 2003 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.675216 kubelet[2003]: I0702 07:49:38.674740 2003 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.675216 kubelet[2003]: I0702 07:49:38.674748 2003 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.675216 kubelet[2003]: I0702 07:49:38.674755 2003 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.675216 kubelet[2003]: I0702 07:49:38.674763 2003 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.675216 kubelet[2003]: I0702 07:49:38.674773 2003 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kxrm9\" (UniqueName: \"kubernetes.io/projected/cf8cace4-87f3-4c58-8cf8-fa93971be467-kube-api-access-kxrm9\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.675216 kubelet[2003]: I0702 07:49:38.674781 2003 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf8cace4-87f3-4c58-8cf8-fa93971be467-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.675216 kubelet[2003]: I0702 07:49:38.674796 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-cni-path" (OuterVolumeSpecName: "cni-path") pod "10c739fa-16a5-45f6-a430-ba984f83a9a0" (UID: "10c739fa-16a5-45f6-a430-ba984f83a9a0"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.675371 kubelet[2003]: I0702 07:49:38.675021 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "10c739fa-16a5-45f6-a430-ba984f83a9a0" (UID: "10c739fa-16a5-45f6-a430-ba984f83a9a0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.677180 kubelet[2003]: I0702 07:49:38.677143 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c739fa-16a5-45f6-a430-ba984f83a9a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "10c739fa-16a5-45f6-a430-ba984f83a9a0" (UID: "10c739fa-16a5-45f6-a430-ba984f83a9a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:49:38.677180 kubelet[2003]: I0702 07:49:38.677162 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c739fa-16a5-45f6-a430-ba984f83a9a0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "10c739fa-16a5-45f6-a430-ba984f83a9a0" (UID: "10c739fa-16a5-45f6-a430-ba984f83a9a0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:49:38.677338 kubelet[2003]: I0702 07:49:38.677297 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c739fa-16a5-45f6-a430-ba984f83a9a0-kube-api-access-8ndlk" (OuterVolumeSpecName: "kube-api-access-8ndlk") pod "10c739fa-16a5-45f6-a430-ba984f83a9a0" (UID: "10c739fa-16a5-45f6-a430-ba984f83a9a0"). InnerVolumeSpecName "kube-api-access-8ndlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:49:38.678337 kubelet[2003]: I0702 07:49:38.678308 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c739fa-16a5-45f6-a430-ba984f83a9a0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "10c739fa-16a5-45f6-a430-ba984f83a9a0" (UID: "10c739fa-16a5-45f6-a430-ba984f83a9a0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:49:38.715734 kubelet[2003]: I0702 07:49:38.715700 2003 scope.go:117] "RemoveContainer" containerID="49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103" Jul 2 07:49:38.717133 env[1195]: time="2024-07-02T07:49:38.716778358Z" level=info msg="RemoveContainer for \"49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103\"" Jul 2 07:49:38.719028 systemd[1]: Removed slice kubepods-burstable-pod10c739fa_16a5_45f6_a430_ba984f83a9a0.slice. Jul 2 07:49:38.719111 systemd[1]: kubepods-burstable-pod10c739fa_16a5_45f6_a430_ba984f83a9a0.slice: Consumed 6.147s CPU time. Jul 2 07:49:38.721080 systemd[1]: Removed slice kubepods-besteffort-podcf8cace4_87f3_4c58_8cf8_fa93971be467.slice. 
Jul 2 07:49:38.776071 kubelet[2003]: I0702 07:49:38.775906 2003 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.776071 kubelet[2003]: I0702 07:49:38.775945 2003 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10c739fa-16a5-45f6-a430-ba984f83a9a0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.776071 kubelet[2003]: I0702 07:49:38.775966 2003 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10c739fa-16a5-45f6-a430-ba984f83a9a0-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.776071 kubelet[2003]: I0702 07:49:38.775976 2003 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10c739fa-16a5-45f6-a430-ba984f83a9a0-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.776071 kubelet[2003]: I0702 07:49:38.775987 2003 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8ndlk\" (UniqueName: \"kubernetes.io/projected/10c739fa-16a5-45f6-a430-ba984f83a9a0-kube-api-access-8ndlk\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.776071 kubelet[2003]: I0702 07:49:38.775996 2003 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10c739fa-16a5-45f6-a430-ba984f83a9a0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:38.778767 env[1195]: time="2024-07-02T07:49:38.778708350Z" level=info msg="RemoveContainer for \"49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103\" returns successfully" Jul 2 07:49:38.779057 kubelet[2003]: I0702 07:49:38.779027 2003 scope.go:117] "RemoveContainer" 
containerID="029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9" Jul 2 07:49:38.780585 env[1195]: time="2024-07-02T07:49:38.780526061Z" level=info msg="RemoveContainer for \"029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9\"" Jul 2 07:49:38.792989 env[1195]: time="2024-07-02T07:49:38.792810215Z" level=info msg="RemoveContainer for \"029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9\" returns successfully" Jul 2 07:49:38.793138 kubelet[2003]: I0702 07:49:38.793078 2003 scope.go:117] "RemoveContainer" containerID="a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227" Jul 2 07:49:38.795440 env[1195]: time="2024-07-02T07:49:38.795385011Z" level=info msg="RemoveContainer for \"a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227\"" Jul 2 07:49:38.913085 env[1195]: time="2024-07-02T07:49:38.913031081Z" level=info msg="RemoveContainer for \"a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227\" returns successfully" Jul 2 07:49:38.913370 kubelet[2003]: I0702 07:49:38.913332 2003 scope.go:117] "RemoveContainer" containerID="f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335" Jul 2 07:49:38.914787 env[1195]: time="2024-07-02T07:49:38.914754573Z" level=info msg="RemoveContainer for \"f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335\"" Jul 2 07:49:39.076564 env[1195]: time="2024-07-02T07:49:39.076456574Z" level=info msg="RemoveContainer for \"f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335\" returns successfully" Jul 2 07:49:39.077209 kubelet[2003]: I0702 07:49:39.076993 2003 scope.go:117] "RemoveContainer" containerID="bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0" Jul 2 07:49:39.079399 env[1195]: time="2024-07-02T07:49:39.079352992Z" level=info msg="RemoveContainer for \"bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0\"" Jul 2 07:49:39.390007 env[1195]: time="2024-07-02T07:49:39.389857835Z" level=info 
msg="RemoveContainer for \"bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0\" returns successfully" Jul 2 07:49:39.390162 kubelet[2003]: I0702 07:49:39.390119 2003 scope.go:117] "RemoveContainer" containerID="49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103" Jul 2 07:49:39.390530 env[1195]: time="2024-07-02T07:49:39.390438773Z" level=error msg="ContainerStatus for \"49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103\": not found" Jul 2 07:49:39.390702 kubelet[2003]: E0702 07:49:39.390681 2003 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103\": not found" containerID="49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103" Jul 2 07:49:39.390874 kubelet[2003]: I0702 07:49:39.390764 2003 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103"} err="failed to get container status \"49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103\": rpc error: code = NotFound desc = an error occurred when try to find container \"49f23f267d533f71f519d4a61146d65ff176279ae499e5935c3ca02b35e9b103\": not found" Jul 2 07:49:39.390874 kubelet[2003]: I0702 07:49:39.390776 2003 scope.go:117] "RemoveContainer" containerID="029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9" Jul 2 07:49:39.390932 env[1195]: time="2024-07-02T07:49:39.390904854Z" level=error msg="ContainerStatus for \"029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9\": not found" Jul 2 07:49:39.391143 kubelet[2003]: E0702 07:49:39.391115 2003 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9\": not found" containerID="029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9" Jul 2 07:49:39.391211 kubelet[2003]: I0702 07:49:39.391168 2003 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9"} err="failed to get container status \"029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"029ba53ef7b8b5c5452259a5a7f5ce304461ad62acdacceda668237f1b3ce2d9\": not found" Jul 2 07:49:39.391211 kubelet[2003]: I0702 07:49:39.391185 2003 scope.go:117] "RemoveContainer" containerID="a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227" Jul 2 07:49:39.391400 env[1195]: time="2024-07-02T07:49:39.391357337Z" level=error msg="ContainerStatus for \"a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227\": not found" Jul 2 07:49:39.391527 kubelet[2003]: E0702 07:49:39.391514 2003 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227\": not found" containerID="a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227" Jul 2 07:49:39.391587 kubelet[2003]: I0702 07:49:39.391537 2003 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227"} err="failed to get container status \"a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9392448edc8b9b9e60cbdced150fa0dd5122f535c574ea74dd860404255d227\": not found" Jul 2 07:49:39.391587 kubelet[2003]: I0702 07:49:39.391547 2003 scope.go:117] "RemoveContainer" containerID="f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335" Jul 2 07:49:39.391865 env[1195]: time="2024-07-02T07:49:39.391797767Z" level=error msg="ContainerStatus for \"f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335\": not found" Jul 2 07:49:39.391987 kubelet[2003]: E0702 07:49:39.391974 2003 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335\": not found" containerID="f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335" Jul 2 07:49:39.392040 kubelet[2003]: I0702 07:49:39.391993 2003 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335"} err="failed to get container status \"f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335\": rpc error: code = NotFound desc = an error occurred when try to find container \"f42772abc0b7051de694128b45ec5fc1d81fdef117b8091cd82d5056b4e74335\": not found" Jul 2 07:49:39.392040 kubelet[2003]: I0702 07:49:39.392002 2003 scope.go:117] "RemoveContainer" containerID="bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0" Jul 2 07:49:39.392199 env[1195]: 
time="2024-07-02T07:49:39.392156912Z" level=error msg="ContainerStatus for \"bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0\": not found" Jul 2 07:49:39.392293 kubelet[2003]: E0702 07:49:39.392281 2003 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0\": not found" containerID="bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0" Jul 2 07:49:39.392324 kubelet[2003]: I0702 07:49:39.392299 2003 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0"} err="failed to get container status \"bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb53d496b6721756b4ba5800e5cc8048b80a93b03715fb245cc7fd5fb02bc4d0\": not found" Jul 2 07:49:39.392324 kubelet[2003]: I0702 07:49:39.392308 2003 scope.go:117] "RemoveContainer" containerID="0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc" Jul 2 07:49:39.393338 env[1195]: time="2024-07-02T07:49:39.393316986Z" level=info msg="RemoveContainer for \"0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc\"" Jul 2 07:49:39.483807 systemd[1]: var-lib-kubelet-pods-cf8cace4\x2d87f3\x2d4c58\x2d8cf8\x2dfa93971be467-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkxrm9.mount: Deactivated successfully. Jul 2 07:49:39.483915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875-rootfs.mount: Deactivated successfully. 
Jul 2 07:49:39.483996 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6932b65708db706735943dba0ec4c2704f40bc9eede8bf7c03b2c156d7d85875-shm.mount: Deactivated successfully. Jul 2 07:49:39.484067 systemd[1]: var-lib-kubelet-pods-10c739fa\x2d16a5\x2d45f6\x2da430\x2dba984f83a9a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8ndlk.mount: Deactivated successfully. Jul 2 07:49:39.484132 systemd[1]: var-lib-kubelet-pods-10c739fa\x2d16a5\x2d45f6\x2da430\x2dba984f83a9a0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:49:39.484198 systemd[1]: var-lib-kubelet-pods-10c739fa\x2d16a5\x2d45f6\x2da430\x2dba984f83a9a0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:49:39.763009 env[1195]: time="2024-07-02T07:49:39.762943289Z" level=info msg="RemoveContainer for \"0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc\" returns successfully" Jul 2 07:49:39.763321 kubelet[2003]: I0702 07:49:39.763280 2003 scope.go:117] "RemoveContainer" containerID="0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc" Jul 2 07:49:39.763685 env[1195]: time="2024-07-02T07:49:39.763611945Z" level=error msg="ContainerStatus for \"0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc\": not found" Jul 2 07:49:39.763826 kubelet[2003]: E0702 07:49:39.763808 2003 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc\": not found" containerID="0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc" Jul 2 07:49:39.763878 kubelet[2003]: I0702 07:49:39.763849 2003 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc"} err="failed to get container status \"0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc\": rpc error: code = NotFound desc = an error occurred when try to find container \"0683536691bc316812bccd01da6c42a9e10c3e08ea5f9f505510abe0e9bb6afc\": not found" Jul 2 07:49:39.875728 sshd[3624]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:39.878702 systemd[1]: sshd@22-10.0.0.87:22-10.0.0.1:52364.service: Deactivated successfully. Jul 2 07:49:39.879393 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 07:49:39.879997 systemd-logind[1189]: Session 23 logged out. Waiting for processes to exit. Jul 2 07:49:39.881171 systemd[1]: Started sshd@23-10.0.0.87:22-10.0.0.1:52370.service. Jul 2 07:49:39.882016 systemd-logind[1189]: Removed session 23. Jul 2 07:49:39.912592 sshd[3783]: Accepted publickey for core from 10.0.0.1 port 52370 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:39.913578 sshd[3783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:39.917028 systemd-logind[1189]: New session 24 of user core. Jul 2 07:49:39.917981 systemd[1]: Started session-24.scope. Jul 2 07:49:39.992511 kubelet[2003]: I0702 07:49:39.992485 2003 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="10c739fa-16a5-45f6-a430-ba984f83a9a0" path="/var/lib/kubelet/pods/10c739fa-16a5-45f6-a430-ba984f83a9a0/volumes" Jul 2 07:49:39.992981 kubelet[2003]: I0702 07:49:39.992949 2003 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cf8cace4-87f3-4c58-8cf8-fa93971be467" path="/var/lib/kubelet/pods/cf8cace4-87f3-4c58-8cf8-fa93971be467/volumes" Jul 2 07:49:40.411285 systemd[1]: Started sshd@24-10.0.0.87:22-10.0.0.1:52376.service. 
Jul 2 07:49:40.420144 sshd[3783]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:40.422706 systemd[1]: sshd@23-10.0.0.87:22-10.0.0.1:52370.service: Deactivated successfully. Jul 2 07:49:40.423570 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 07:49:40.424272 systemd-logind[1189]: Session 24 logged out. Waiting for processes to exit. Jul 2 07:49:40.425054 systemd-logind[1189]: Removed session 24. Jul 2 07:49:40.441917 kubelet[2003]: I0702 07:49:40.441747 2003 topology_manager.go:215] "Topology Admit Handler" podUID="9a3b503f-f470-49a3-84c1-c67a7958f10e" podNamespace="kube-system" podName="cilium-sjsp2" Jul 2 07:49:40.443186 kubelet[2003]: E0702 07:49:40.442554 2003 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10c739fa-16a5-45f6-a430-ba984f83a9a0" containerName="mount-cgroup" Jul 2 07:49:40.443186 kubelet[2003]: E0702 07:49:40.442578 2003 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10c739fa-16a5-45f6-a430-ba984f83a9a0" containerName="mount-bpf-fs" Jul 2 07:49:40.443186 kubelet[2003]: E0702 07:49:40.442588 2003 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10c739fa-16a5-45f6-a430-ba984f83a9a0" containerName="apply-sysctl-overwrites" Jul 2 07:49:40.443186 kubelet[2003]: E0702 07:49:40.442596 2003 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cf8cace4-87f3-4c58-8cf8-fa93971be467" containerName="cilium-operator" Jul 2 07:49:40.443186 kubelet[2003]: E0702 07:49:40.442605 2003 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10c739fa-16a5-45f6-a430-ba984f83a9a0" containerName="clean-cilium-state" Jul 2 07:49:40.443186 kubelet[2003]: E0702 07:49:40.442613 2003 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10c739fa-16a5-45f6-a430-ba984f83a9a0" containerName="cilium-agent" Jul 2 07:49:40.443186 kubelet[2003]: I0702 07:49:40.442640 2003 memory_manager.go:346] "RemoveStaleState removing state" 
podUID="cf8cace4-87f3-4c58-8cf8-fa93971be467" containerName="cilium-operator" Jul 2 07:49:40.443186 kubelet[2003]: I0702 07:49:40.442648 2003 memory_manager.go:346] "RemoveStaleState removing state" podUID="10c739fa-16a5-45f6-a430-ba984f83a9a0" containerName="cilium-agent" Jul 2 07:49:40.448795 systemd[1]: Created slice kubepods-burstable-pod9a3b503f_f470_49a3_84c1_c67a7958f10e.slice. Jul 2 07:49:40.456662 sshd[3794]: Accepted publickey for core from 10.0.0.1 port 52376 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:40.458384 sshd[3794]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:40.470066 systemd[1]: Started session-25.scope. Jul 2 07:49:40.470732 systemd-logind[1189]: New session 25 of user core. Jul 2 07:49:40.484699 kubelet[2003]: I0702 07:49:40.484657 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-run\") pod \"cilium-sjsp2\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " pod="kube-system/cilium-sjsp2" Jul 2 07:49:40.484948 kubelet[2003]: I0702 07:49:40.484933 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-cni-path\") pod \"cilium-sjsp2\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " pod="kube-system/cilium-sjsp2" Jul 2 07:49:40.485119 kubelet[2003]: I0702 07:49:40.485103 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-host-proc-sys-kernel\") pod \"cilium-sjsp2\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " pod="kube-system/cilium-sjsp2" Jul 2 07:49:40.485245 kubelet[2003]: I0702 07:49:40.485230 2003 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-xtables-lock\") pod \"cilium-sjsp2\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " pod="kube-system/cilium-sjsp2" Jul 2 07:49:40.485351 kubelet[2003]: I0702 07:49:40.485335 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-cgroup\") pod \"cilium-sjsp2\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " pod="kube-system/cilium-sjsp2" Jul 2 07:49:40.485486 kubelet[2003]: I0702 07:49:40.485470 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a3b503f-f470-49a3-84c1-c67a7958f10e-clustermesh-secrets\") pod \"cilium-sjsp2\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " pod="kube-system/cilium-sjsp2" Jul 2 07:49:40.485600 kubelet[2003]: I0702 07:49:40.485585 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-bpf-maps\") pod \"cilium-sjsp2\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " pod="kube-system/cilium-sjsp2" Jul 2 07:49:40.485712 kubelet[2003]: I0702 07:49:40.485697 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mrvr\" (UniqueName: \"kubernetes.io/projected/9a3b503f-f470-49a3-84c1-c67a7958f10e-kube-api-access-7mrvr\") pod \"cilium-sjsp2\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " pod="kube-system/cilium-sjsp2" Jul 2 07:49:40.485831 kubelet[2003]: I0702 07:49:40.485815 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-etc-cni-netd\") pod \"cilium-sjsp2\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " pod="kube-system/cilium-sjsp2" Jul 2 07:49:40.486117 kubelet[2003]: I0702 07:49:40.486102 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-lib-modules\") pod \"cilium-sjsp2\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " pod="kube-system/cilium-sjsp2" Jul 2 07:49:40.486227 kubelet[2003]: I0702 07:49:40.486212 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-host-proc-sys-net\") pod \"cilium-sjsp2\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " pod="kube-system/cilium-sjsp2" Jul 2 07:49:40.486338 kubelet[2003]: I0702 07:49:40.486323 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a3b503f-f470-49a3-84c1-c67a7958f10e-hubble-tls\") pod \"cilium-sjsp2\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " pod="kube-system/cilium-sjsp2" Jul 2 07:49:40.486451 kubelet[2003]: I0702 07:49:40.486436 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-hostproc\") pod \"cilium-sjsp2\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " pod="kube-system/cilium-sjsp2" Jul 2 07:49:40.486570 kubelet[2003]: I0702 07:49:40.486554 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-config-path\") pod \"cilium-sjsp2\" (UID: 
\"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " pod="kube-system/cilium-sjsp2" Jul 2 07:49:40.486699 kubelet[2003]: I0702 07:49:40.486679 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-ipsec-secrets\") pod \"cilium-sjsp2\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " pod="kube-system/cilium-sjsp2" Jul 2 07:49:40.604604 sshd[3794]: pam_unix(sshd:session): session closed for user core Jul 2 07:49:40.610120 systemd[1]: sshd@24-10.0.0.87:22-10.0.0.1:52376.service: Deactivated successfully. Jul 2 07:49:40.610786 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 07:49:40.612662 systemd-logind[1189]: Session 25 logged out. Waiting for processes to exit. Jul 2 07:49:40.612785 systemd[1]: Started sshd@25-10.0.0.87:22-10.0.0.1:52382.service. Jul 2 07:49:40.614083 systemd-logind[1189]: Removed session 25. Jul 2 07:49:40.645935 sshd[3812]: Accepted publickey for core from 10.0.0.1 port 52382 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:49:40.647176 sshd[3812]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:49:40.650947 systemd-logind[1189]: New session 26 of user core. Jul 2 07:49:40.651773 systemd[1]: Started session-26.scope. Jul 2 07:49:40.656364 kubelet[2003]: E0702 07:49:40.656330 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:40.657559 env[1195]: time="2024-07-02T07:49:40.657137747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sjsp2,Uid:9a3b503f-f470-49a3-84c1-c67a7958f10e,Namespace:kube-system,Attempt:0,}" Jul 2 07:49:40.954648 env[1195]: time="2024-07-02T07:49:40.954547867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:49:40.954648 env[1195]: time="2024-07-02T07:49:40.954605306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:49:40.954648 env[1195]: time="2024-07-02T07:49:40.954622910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:49:40.955037 env[1195]: time="2024-07-02T07:49:40.954800559Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c9d682e737afb83b40d432aeb93bbf046cd82f75fe70aa9ecb053819cb597af pid=3831 runtime=io.containerd.runc.v2 Jul 2 07:49:40.966260 systemd[1]: Started cri-containerd-0c9d682e737afb83b40d432aeb93bbf046cd82f75fe70aa9ecb053819cb597af.scope. Jul 2 07:49:40.990631 kubelet[2003]: E0702 07:49:40.990590 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:40.991052 env[1195]: time="2024-07-02T07:49:40.989892431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sjsp2,Uid:9a3b503f-f470-49a3-84c1-c67a7958f10e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c9d682e737afb83b40d432aeb93bbf046cd82f75fe70aa9ecb053819cb597af\"" Jul 2 07:49:40.992073 kubelet[2003]: E0702 07:49:40.991811 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:40.993618 env[1195]: time="2024-07-02T07:49:40.993586789Z" level=info msg="CreateContainer within sandbox \"0c9d682e737afb83b40d432aeb93bbf046cd82f75fe70aa9ecb053819cb597af\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:49:41.047127 kubelet[2003]: E0702 07:49:41.047083 2003 kubelet.go:2855] "Container 
runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:49:41.047647 env[1195]: time="2024-07-02T07:49:41.047598236Z" level=info msg="CreateContainer within sandbox \"0c9d682e737afb83b40d432aeb93bbf046cd82f75fe70aa9ecb053819cb597af\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6\"" Jul 2 07:49:41.048172 env[1195]: time="2024-07-02T07:49:41.048144308Z" level=info msg="StartContainer for \"00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6\"" Jul 2 07:49:41.062939 systemd[1]: Started cri-containerd-00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6.scope. Jul 2 07:49:41.072136 systemd[1]: cri-containerd-00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6.scope: Deactivated successfully. Jul 2 07:49:41.072467 systemd[1]: Stopped cri-containerd-00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6.scope. 
Jul 2 07:49:41.091355 env[1195]: time="2024-07-02T07:49:41.091299267Z" level=info msg="shim disconnected" id=00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6 Jul 2 07:49:41.091355 env[1195]: time="2024-07-02T07:49:41.091355414Z" level=warning msg="cleaning up after shim disconnected" id=00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6 namespace=k8s.io Jul 2 07:49:41.091632 env[1195]: time="2024-07-02T07:49:41.091366435Z" level=info msg="cleaning up dead shim" Jul 2 07:49:41.099484 env[1195]: time="2024-07-02T07:49:41.098760184Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3889 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T07:49:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 07:49:41.100805 env[1195]: time="2024-07-02T07:49:41.099794266Z" level=error msg="copy shim log" error="read /proc/self/fd/27: file already closed" Jul 2 07:49:41.101078 env[1195]: time="2024-07-02T07:49:41.101025884Z" level=error msg="Failed to pipe stderr of container \"00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6\"" error="reading from a closed fifo" Jul 2 07:49:41.102118 env[1195]: time="2024-07-02T07:49:41.102049626Z" level=error msg="Failed to pipe stdout of container \"00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6\"" error="reading from a closed fifo" Jul 2 07:49:41.105238 env[1195]: time="2024-07-02T07:49:41.105160658Z" level=error msg="StartContainer for \"00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Jul 2 07:49:41.105494 kubelet[2003]: E0702 07:49:41.105461 2003 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6" Jul 2 07:49:41.105685 kubelet[2003]: E0702 07:49:41.105666 2003 kuberuntime_manager.go:1261] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 07:49:41.105685 kubelet[2003]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 07:49:41.105685 kubelet[2003]: rm /hostbin/cilium-mount Jul 2 07:49:41.105818 kubelet[2003]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7mrvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-sjsp2_kube-system(9a3b503f-f470-49a3-84c1-c67a7958f10e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 07:49:41.105818 kubelet[2003]: E0702 07:49:41.105714 2003 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-sjsp2" podUID="9a3b503f-f470-49a3-84c1-c67a7958f10e" Jul 2 07:49:41.726068 env[1195]: time="2024-07-02T07:49:41.726003198Z" level=info msg="StopPodSandbox for \"0c9d682e737afb83b40d432aeb93bbf046cd82f75fe70aa9ecb053819cb597af\"" Jul 2 07:49:41.726260 env[1195]: time="2024-07-02T07:49:41.726073893Z" level=info msg="Container to stop \"00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:49:41.728846 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c9d682e737afb83b40d432aeb93bbf046cd82f75fe70aa9ecb053819cb597af-shm.mount: Deactivated successfully. 
Jul 2 07:49:41.734709 systemd[1]: cri-containerd-0c9d682e737afb83b40d432aeb93bbf046cd82f75fe70aa9ecb053819cb597af.scope: Deactivated successfully. Jul 2 07:49:41.755234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c9d682e737afb83b40d432aeb93bbf046cd82f75fe70aa9ecb053819cb597af-rootfs.mount: Deactivated successfully. Jul 2 07:49:41.759387 env[1195]: time="2024-07-02T07:49:41.759344494Z" level=info msg="shim disconnected" id=0c9d682e737afb83b40d432aeb93bbf046cd82f75fe70aa9ecb053819cb597af Jul 2 07:49:41.759513 env[1195]: time="2024-07-02T07:49:41.759389370Z" level=warning msg="cleaning up after shim disconnected" id=0c9d682e737afb83b40d432aeb93bbf046cd82f75fe70aa9ecb053819cb597af namespace=k8s.io Jul 2 07:49:41.759513 env[1195]: time="2024-07-02T07:49:41.759399769Z" level=info msg="cleaning up dead shim" Jul 2 07:49:41.765545 env[1195]: time="2024-07-02T07:49:41.765477659Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3919 runtime=io.containerd.runc.v2\n" Jul 2 07:49:41.765806 env[1195]: time="2024-07-02T07:49:41.765774044Z" level=info msg="TearDown network for sandbox \"0c9d682e737afb83b40d432aeb93bbf046cd82f75fe70aa9ecb053819cb597af\" successfully" Jul 2 07:49:41.765806 env[1195]: time="2024-07-02T07:49:41.765796347Z" level=info msg="StopPodSandbox for \"0c9d682e737afb83b40d432aeb93bbf046cd82f75fe70aa9ecb053819cb597af\" returns successfully" Jul 2 07:49:41.794315 kubelet[2003]: I0702 07:49:41.794263 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-host-proc-sys-net\") pod \"9a3b503f-f470-49a3-84c1-c67a7958f10e\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " Jul 2 07:49:41.794315 kubelet[2003]: I0702 07:49:41.794309 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-config-path\") pod \"9a3b503f-f470-49a3-84c1-c67a7958f10e\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " Jul 2 07:49:41.794315 kubelet[2003]: I0702 07:49:41.794328 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a3b503f-f470-49a3-84c1-c67a7958f10e-hubble-tls\") pod \"9a3b503f-f470-49a3-84c1-c67a7958f10e\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794343 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-bpf-maps\") pod \"9a3b503f-f470-49a3-84c1-c67a7958f10e\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794361 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-ipsec-secrets\") pod \"9a3b503f-f470-49a3-84c1-c67a7958f10e\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794376 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-xtables-lock\") pod \"9a3b503f-f470-49a3-84c1-c67a7958f10e\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794371 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9a3b503f-f470-49a3-84c1-c67a7958f10e" (UID: "9a3b503f-f470-49a3-84c1-c67a7958f10e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794392 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mrvr\" (UniqueName: \"kubernetes.io/projected/9a3b503f-f470-49a3-84c1-c67a7958f10e-kube-api-access-7mrvr\") pod \"9a3b503f-f470-49a3-84c1-c67a7958f10e\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794406 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-lib-modules\") pod \"9a3b503f-f470-49a3-84c1-c67a7958f10e\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794412 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9a3b503f-f470-49a3-84c1-c67a7958f10e" (UID: "9a3b503f-f470-49a3-84c1-c67a7958f10e"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794434 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a3b503f-f470-49a3-84c1-c67a7958f10e-clustermesh-secrets\") pod \"9a3b503f-f470-49a3-84c1-c67a7958f10e\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794451 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-host-proc-sys-kernel\") pod \"9a3b503f-f470-49a3-84c1-c67a7958f10e\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794466 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-hostproc\") pod \"9a3b503f-f470-49a3-84c1-c67a7958f10e\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794480 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-run\") pod \"9a3b503f-f470-49a3-84c1-c67a7958f10e\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794494 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-cni-path\") pod \"9a3b503f-f470-49a3-84c1-c67a7958f10e\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794507 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-cgroup\") pod \"9a3b503f-f470-49a3-84c1-c67a7958f10e\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794523 2003 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-etc-cni-netd\") pod \"9a3b503f-f470-49a3-84c1-c67a7958f10e\" (UID: \"9a3b503f-f470-49a3-84c1-c67a7958f10e\") " Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794551 2003 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:41.794628 kubelet[2003]: I0702 07:49:41.794561 2003 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:41.795257 kubelet[2003]: I0702 07:49:41.794586 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9a3b503f-f470-49a3-84c1-c67a7958f10e" (UID: "9a3b503f-f470-49a3-84c1-c67a7958f10e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:41.795257 kubelet[2003]: I0702 07:49:41.794782 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9a3b503f-f470-49a3-84c1-c67a7958f10e" (UID: "9a3b503f-f470-49a3-84c1-c67a7958f10e"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:41.795257 kubelet[2003]: I0702 07:49:41.794821 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9a3b503f-f470-49a3-84c1-c67a7958f10e" (UID: "9a3b503f-f470-49a3-84c1-c67a7958f10e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:41.795257 kubelet[2003]: I0702 07:49:41.795083 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9a3b503f-f470-49a3-84c1-c67a7958f10e" (UID: "9a3b503f-f470-49a3-84c1-c67a7958f10e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:41.795257 kubelet[2003]: I0702 07:49:41.795111 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-hostproc" (OuterVolumeSpecName: "hostproc") pod "9a3b503f-f470-49a3-84c1-c67a7958f10e" (UID: "9a3b503f-f470-49a3-84c1-c67a7958f10e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:41.795257 kubelet[2003]: I0702 07:49:41.795141 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-cni-path" (OuterVolumeSpecName: "cni-path") pod "9a3b503f-f470-49a3-84c1-c67a7958f10e" (UID: "9a3b503f-f470-49a3-84c1-c67a7958f10e"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:41.795257 kubelet[2003]: I0702 07:49:41.795159 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9a3b503f-f470-49a3-84c1-c67a7958f10e" (UID: "9a3b503f-f470-49a3-84c1-c67a7958f10e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:41.795530 kubelet[2003]: I0702 07:49:41.795391 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9a3b503f-f470-49a3-84c1-c67a7958f10e" (UID: "9a3b503f-f470-49a3-84c1-c67a7958f10e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:41.796366 kubelet[2003]: I0702 07:49:41.796340 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9a3b503f-f470-49a3-84c1-c67a7958f10e" (UID: "9a3b503f-f470-49a3-84c1-c67a7958f10e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:49:41.798843 kubelet[2003]: I0702 07:49:41.798752 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "9a3b503f-f470-49a3-84c1-c67a7958f10e" (UID: "9a3b503f-f470-49a3-84c1-c67a7958f10e"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:49:41.798843 kubelet[2003]: I0702 07:49:41.798805 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a3b503f-f470-49a3-84c1-c67a7958f10e-kube-api-access-7mrvr" (OuterVolumeSpecName: "kube-api-access-7mrvr") pod "9a3b503f-f470-49a3-84c1-c67a7958f10e" (UID: "9a3b503f-f470-49a3-84c1-c67a7958f10e"). InnerVolumeSpecName "kube-api-access-7mrvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:49:41.799003 systemd[1]: var-lib-kubelet-pods-9a3b503f\x2df470\x2d49a3\x2d84c1\x2dc67a7958f10e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 07:49:41.799119 systemd[1]: var-lib-kubelet-pods-9a3b503f\x2df470\x2d49a3\x2d84c1\x2dc67a7958f10e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:49:41.799261 kubelet[2003]: I0702 07:49:41.799243 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a3b503f-f470-49a3-84c1-c67a7958f10e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9a3b503f-f470-49a3-84c1-c67a7958f10e" (UID: "9a3b503f-f470-49a3-84c1-c67a7958f10e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:49:41.799664 kubelet[2003]: I0702 07:49:41.799617 2003 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a3b503f-f470-49a3-84c1-c67a7958f10e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9a3b503f-f470-49a3-84c1-c67a7958f10e" (UID: "9a3b503f-f470-49a3-84c1-c67a7958f10e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:49:41.801632 systemd[1]: var-lib-kubelet-pods-9a3b503f\x2df470\x2d49a3\x2d84c1\x2dc67a7958f10e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7mrvr.mount: Deactivated successfully. 
Jul 2 07:49:41.801723 systemd[1]: var-lib-kubelet-pods-9a3b503f\x2df470\x2d49a3\x2d84c1\x2dc67a7958f10e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:49:41.894730 kubelet[2003]: I0702 07:49:41.894649 2003 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:41.894730 kubelet[2003]: I0702 07:49:41.894699 2003 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a3b503f-f470-49a3-84c1-c67a7958f10e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:41.894730 kubelet[2003]: I0702 07:49:41.894713 2003 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:41.894730 kubelet[2003]: I0702 07:49:41.894724 2003 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:41.894730 kubelet[2003]: I0702 07:49:41.894736 2003 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:41.894730 kubelet[2003]: I0702 07:49:41.894749 2003 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7mrvr\" (UniqueName: \"kubernetes.io/projected/9a3b503f-f470-49a3-84c1-c67a7958f10e-kube-api-access-7mrvr\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:41.895142 kubelet[2003]: I0702 07:49:41.894759 2003 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:41.895142 kubelet[2003]: I0702 07:49:41.894771 2003 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a3b503f-f470-49a3-84c1-c67a7958f10e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:41.895142 kubelet[2003]: I0702 07:49:41.894782 2003 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:41.895142 kubelet[2003]: I0702 07:49:41.894792 2003 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:41.895142 kubelet[2003]: I0702 07:49:41.894803 2003 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:41.895142 kubelet[2003]: I0702 07:49:41.894813 2003 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:41.895142 kubelet[2003]: I0702 07:49:41.894823 2003 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a3b503f-f470-49a3-84c1-c67a7958f10e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 07:49:41.990915 kubelet[2003]: E0702 07:49:41.990875 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:41.996279 systemd[1]: Removed slice 
kubepods-burstable-pod9a3b503f_f470_49a3_84c1_c67a7958f10e.slice. Jul 2 07:49:42.728973 kubelet[2003]: I0702 07:49:42.728938 2003 scope.go:117] "RemoveContainer" containerID="00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6" Jul 2 07:49:42.731000 env[1195]: time="2024-07-02T07:49:42.730937420Z" level=info msg="RemoveContainer for \"00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6\"" Jul 2 07:49:42.839596 env[1195]: time="2024-07-02T07:49:42.839537591Z" level=info msg="RemoveContainer for \"00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6\" returns successfully" Jul 2 07:49:42.999973 kubelet[2003]: I0702 07:49:42.999859 2003 topology_manager.go:215] "Topology Admit Handler" podUID="67622516-1ba0-46fd-9a58-ad033efd7e81" podNamespace="kube-system" podName="cilium-dcdfz" Jul 2 07:49:43.000403 kubelet[2003]: E0702 07:49:43.000376 2003 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a3b503f-f470-49a3-84c1-c67a7958f10e" containerName="mount-cgroup" Jul 2 07:49:43.000403 kubelet[2003]: I0702 07:49:43.000416 2003 memory_manager.go:346] "RemoveStaleState removing state" podUID="9a3b503f-f470-49a3-84c1-c67a7958f10e" containerName="mount-cgroup" Jul 2 07:49:43.005421 systemd[1]: Created slice kubepods-burstable-pod67622516_1ba0_46fd_9a58_ad033efd7e81.slice. 
Jul 2 07:49:43.101612 kubelet[2003]: I0702 07:49:43.101561 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67622516-1ba0-46fd-9a58-ad033efd7e81-lib-modules\") pod \"cilium-dcdfz\" (UID: \"67622516-1ba0-46fd-9a58-ad033efd7e81\") " pod="kube-system/cilium-dcdfz" Jul 2 07:49:43.101612 kubelet[2003]: I0702 07:49:43.101613 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/67622516-1ba0-46fd-9a58-ad033efd7e81-cilium-run\") pod \"cilium-dcdfz\" (UID: \"67622516-1ba0-46fd-9a58-ad033efd7e81\") " pod="kube-system/cilium-dcdfz" Jul 2 07:49:43.101797 kubelet[2003]: I0702 07:49:43.101736 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/67622516-1ba0-46fd-9a58-ad033efd7e81-bpf-maps\") pod \"cilium-dcdfz\" (UID: \"67622516-1ba0-46fd-9a58-ad033efd7e81\") " pod="kube-system/cilium-dcdfz" Jul 2 07:49:43.101825 kubelet[2003]: I0702 07:49:43.101801 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67622516-1ba0-46fd-9a58-ad033efd7e81-etc-cni-netd\") pod \"cilium-dcdfz\" (UID: \"67622516-1ba0-46fd-9a58-ad033efd7e81\") " pod="kube-system/cilium-dcdfz" Jul 2 07:49:43.101858 kubelet[2003]: I0702 07:49:43.101830 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67622516-1ba0-46fd-9a58-ad033efd7e81-cilium-config-path\") pod \"cilium-dcdfz\" (UID: \"67622516-1ba0-46fd-9a58-ad033efd7e81\") " pod="kube-system/cilium-dcdfz" Jul 2 07:49:43.101885 kubelet[2003]: I0702 07:49:43.101857 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/67622516-1ba0-46fd-9a58-ad033efd7e81-cilium-ipsec-secrets\") pod \"cilium-dcdfz\" (UID: \"67622516-1ba0-46fd-9a58-ad033efd7e81\") " pod="kube-system/cilium-dcdfz"
Jul 2 07:49:43.101951 kubelet[2003]: I0702 07:49:43.101927 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/67622516-1ba0-46fd-9a58-ad033efd7e81-host-proc-sys-net\") pod \"cilium-dcdfz\" (UID: \"67622516-1ba0-46fd-9a58-ad033efd7e81\") " pod="kube-system/cilium-dcdfz"
Jul 2 07:49:43.102005 kubelet[2003]: I0702 07:49:43.101987 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/67622516-1ba0-46fd-9a58-ad033efd7e81-cni-path\") pod \"cilium-dcdfz\" (UID: \"67622516-1ba0-46fd-9a58-ad033efd7e81\") " pod="kube-system/cilium-dcdfz"
Jul 2 07:49:43.102034 kubelet[2003]: I0702 07:49:43.102007 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/67622516-1ba0-46fd-9a58-ad033efd7e81-clustermesh-secrets\") pod \"cilium-dcdfz\" (UID: \"67622516-1ba0-46fd-9a58-ad033efd7e81\") " pod="kube-system/cilium-dcdfz"
Jul 2 07:49:43.102081 kubelet[2003]: I0702 07:49:43.102060 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/67622516-1ba0-46fd-9a58-ad033efd7e81-hubble-tls\") pod \"cilium-dcdfz\" (UID: \"67622516-1ba0-46fd-9a58-ad033efd7e81\") " pod="kube-system/cilium-dcdfz"
Jul 2 07:49:43.102132 kubelet[2003]: I0702 07:49:43.102104 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mpjg\" (UniqueName: \"kubernetes.io/projected/67622516-1ba0-46fd-9a58-ad033efd7e81-kube-api-access-9mpjg\") pod \"cilium-dcdfz\" (UID: \"67622516-1ba0-46fd-9a58-ad033efd7e81\") " pod="kube-system/cilium-dcdfz"
Jul 2 07:49:43.102172 kubelet[2003]: I0702 07:49:43.102137 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/67622516-1ba0-46fd-9a58-ad033efd7e81-hostproc\") pod \"cilium-dcdfz\" (UID: \"67622516-1ba0-46fd-9a58-ad033efd7e81\") " pod="kube-system/cilium-dcdfz"
Jul 2 07:49:43.102202 kubelet[2003]: I0702 07:49:43.102176 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/67622516-1ba0-46fd-9a58-ad033efd7e81-cilium-cgroup\") pod \"cilium-dcdfz\" (UID: \"67622516-1ba0-46fd-9a58-ad033efd7e81\") " pod="kube-system/cilium-dcdfz"
Jul 2 07:49:43.102202 kubelet[2003]: I0702 07:49:43.102196 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67622516-1ba0-46fd-9a58-ad033efd7e81-xtables-lock\") pod \"cilium-dcdfz\" (UID: \"67622516-1ba0-46fd-9a58-ad033efd7e81\") " pod="kube-system/cilium-dcdfz"
Jul 2 07:49:43.102250 kubelet[2003]: I0702 07:49:43.102222 2003 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/67622516-1ba0-46fd-9a58-ad033efd7e81-host-proc-sys-kernel\") pod \"cilium-dcdfz\" (UID: \"67622516-1ba0-46fd-9a58-ad033efd7e81\") " pod="kube-system/cilium-dcdfz"
Jul 2 07:49:43.608589 kubelet[2003]: E0702 07:49:43.608529 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:49:43.609095 env[1195]: time="2024-07-02T07:49:43.609031674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dcdfz,Uid:67622516-1ba0-46fd-9a58-ad033efd7e81,Namespace:kube-system,Attempt:0,}"
Jul 2 07:49:43.991950 kubelet[2003]: I0702 07:49:43.991922 2003 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9a3b503f-f470-49a3-84c1-c67a7958f10e" path="/var/lib/kubelet/pods/9a3b503f-f470-49a3-84c1-c67a7958f10e/volumes"
Jul 2 07:49:44.182505 env[1195]: time="2024-07-02T07:49:44.182426443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:49:44.182505 env[1195]: time="2024-07-02T07:49:44.182470307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:49:44.182505 env[1195]: time="2024-07-02T07:49:44.182484063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:49:44.182839 env[1195]: time="2024-07-02T07:49:44.182664316Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9755eebb5ac6be5296af4dd51463798c7add2cdb2743925f65bfd19c54e28373 pid=3947 runtime=io.containerd.runc.v2
Jul 2 07:49:44.192439 systemd[1]: Started cri-containerd-9755eebb5ac6be5296af4dd51463798c7add2cdb2743925f65bfd19c54e28373.scope.
Jul 2 07:49:44.197494 kubelet[2003]: W0702 07:49:44.197444 2003 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a3b503f_f470_49a3_84c1_c67a7958f10e.slice/cri-containerd-00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6.scope WatchSource:0}: container "00cf3b67f8b15775631a3506dae33c4e0bee354b54edf820f9aefa47357b14b6" in namespace "k8s.io": not found
Jul 2 07:49:44.212691 env[1195]: time="2024-07-02T07:49:44.212650769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dcdfz,Uid:67622516-1ba0-46fd-9a58-ad033efd7e81,Namespace:kube-system,Attempt:0,} returns sandbox id \"9755eebb5ac6be5296af4dd51463798c7add2cdb2743925f65bfd19c54e28373\""
Jul 2 07:49:44.213139 kubelet[2003]: E0702 07:49:44.213113 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:49:44.215190 env[1195]: time="2024-07-02T07:49:44.215158466Z" level=info msg="CreateContainer within sandbox \"9755eebb5ac6be5296af4dd51463798c7add2cdb2743925f65bfd19c54e28373\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 07:49:44.560383 env[1195]: time="2024-07-02T07:49:44.560320717Z" level=info msg="CreateContainer within sandbox \"9755eebb5ac6be5296af4dd51463798c7add2cdb2743925f65bfd19c54e28373\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"43a2be204d085060b1786aff9b8acfecfcbb4516f7fe8679d2ffdfeff35b6bf9\""
Jul 2 07:49:44.561159 env[1195]: time="2024-07-02T07:49:44.561122364Z" level=info msg="StartContainer for \"43a2be204d085060b1786aff9b8acfecfcbb4516f7fe8679d2ffdfeff35b6bf9\""
Jul 2 07:49:44.577010 systemd[1]: Started cri-containerd-43a2be204d085060b1786aff9b8acfecfcbb4516f7fe8679d2ffdfeff35b6bf9.scope.
Jul 2 07:49:44.645951 systemd[1]: cri-containerd-43a2be204d085060b1786aff9b8acfecfcbb4516f7fe8679d2ffdfeff35b6bf9.scope: Deactivated successfully.
Jul 2 07:49:44.674922 env[1195]: time="2024-07-02T07:49:44.674858528Z" level=info msg="StartContainer for \"43a2be204d085060b1786aff9b8acfecfcbb4516f7fe8679d2ffdfeff35b6bf9\" returns successfully"
Jul 2 07:49:44.735285 kubelet[2003]: E0702 07:49:44.735258 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:49:44.856758 env[1195]: time="2024-07-02T07:49:44.856627619Z" level=info msg="shim disconnected" id=43a2be204d085060b1786aff9b8acfecfcbb4516f7fe8679d2ffdfeff35b6bf9
Jul 2 07:49:44.856758 env[1195]: time="2024-07-02T07:49:44.856671412Z" level=warning msg="cleaning up after shim disconnected" id=43a2be204d085060b1786aff9b8acfecfcbb4516f7fe8679d2ffdfeff35b6bf9 namespace=k8s.io
Jul 2 07:49:44.856758 env[1195]: time="2024-07-02T07:49:44.856680629Z" level=info msg="cleaning up dead shim"
Jul 2 07:49:44.862440 env[1195]: time="2024-07-02T07:49:44.862383552Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4030 runtime=io.containerd.runc.v2\n"
Jul 2 07:49:45.208016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43a2be204d085060b1786aff9b8acfecfcbb4516f7fe8679d2ffdfeff35b6bf9-rootfs.mount: Deactivated successfully.
Jul 2 07:49:45.739129 kubelet[2003]: E0702 07:49:45.739094 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:49:45.741099 env[1195]: time="2024-07-02T07:49:45.741052845Z" level=info msg="CreateContainer within sandbox \"9755eebb5ac6be5296af4dd51463798c7add2cdb2743925f65bfd19c54e28373\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 07:49:45.791542 env[1195]: time="2024-07-02T07:49:45.791487078Z" level=info msg="CreateContainer within sandbox \"9755eebb5ac6be5296af4dd51463798c7add2cdb2743925f65bfd19c54e28373\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1edf368c69f54742a36ec4ad5a58ef04f1122a9f7e7d6fe6785581f977d89cd2\""
Jul 2 07:49:45.792066 env[1195]: time="2024-07-02T07:49:45.792017167Z" level=info msg="StartContainer for \"1edf368c69f54742a36ec4ad5a58ef04f1122a9f7e7d6fe6785581f977d89cd2\""
Jul 2 07:49:45.807408 systemd[1]: Started cri-containerd-1edf368c69f54742a36ec4ad5a58ef04f1122a9f7e7d6fe6785581f977d89cd2.scope.
Jul 2 07:49:45.826923 env[1195]: time="2024-07-02T07:49:45.826851879Z" level=info msg="StartContainer for \"1edf368c69f54742a36ec4ad5a58ef04f1122a9f7e7d6fe6785581f977d89cd2\" returns successfully"
Jul 2 07:49:45.831594 systemd[1]: cri-containerd-1edf368c69f54742a36ec4ad5a58ef04f1122a9f7e7d6fe6785581f977d89cd2.scope: Deactivated successfully.
Jul 2 07:49:45.850491 env[1195]: time="2024-07-02T07:49:45.850444428Z" level=info msg="shim disconnected" id=1edf368c69f54742a36ec4ad5a58ef04f1122a9f7e7d6fe6785581f977d89cd2
Jul 2 07:49:45.850491 env[1195]: time="2024-07-02T07:49:45.850491578Z" level=warning msg="cleaning up after shim disconnected" id=1edf368c69f54742a36ec4ad5a58ef04f1122a9f7e7d6fe6785581f977d89cd2 namespace=k8s.io
Jul 2 07:49:45.850660 env[1195]: time="2024-07-02T07:49:45.850500515Z" level=info msg="cleaning up dead shim"
Jul 2 07:49:45.857700 env[1195]: time="2024-07-02T07:49:45.857655417Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4091 runtime=io.containerd.runc.v2\n"
Jul 2 07:49:46.048091 kubelet[2003]: E0702 07:49:46.048017 2003 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 07:49:46.207765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1edf368c69f54742a36ec4ad5a58ef04f1122a9f7e7d6fe6785581f977d89cd2-rootfs.mount: Deactivated successfully.
Jul 2 07:49:46.742230 kubelet[2003]: E0702 07:49:46.742204 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:49:46.744696 env[1195]: time="2024-07-02T07:49:46.744651998Z" level=info msg="CreateContainer within sandbox \"9755eebb5ac6be5296af4dd51463798c7add2cdb2743925f65bfd19c54e28373\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 07:49:46.761904 env[1195]: time="2024-07-02T07:49:46.761795216Z" level=info msg="CreateContainer within sandbox \"9755eebb5ac6be5296af4dd51463798c7add2cdb2743925f65bfd19c54e28373\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3542a35c30198907a0053cc97bb0e3ce58e7ce24c8c0bb6aac6bc2413ebff5f3\""
Jul 2 07:49:46.762506 env[1195]: time="2024-07-02T07:49:46.762476833Z" level=info msg="StartContainer for \"3542a35c30198907a0053cc97bb0e3ce58e7ce24c8c0bb6aac6bc2413ebff5f3\""
Jul 2 07:49:46.777241 systemd[1]: Started cri-containerd-3542a35c30198907a0053cc97bb0e3ce58e7ce24c8c0bb6aac6bc2413ebff5f3.scope.
Jul 2 07:49:46.805533 env[1195]: time="2024-07-02T07:49:46.805483715Z" level=info msg="StartContainer for \"3542a35c30198907a0053cc97bb0e3ce58e7ce24c8c0bb6aac6bc2413ebff5f3\" returns successfully"
Jul 2 07:49:46.807393 systemd[1]: cri-containerd-3542a35c30198907a0053cc97bb0e3ce58e7ce24c8c0bb6aac6bc2413ebff5f3.scope: Deactivated successfully.
Jul 2 07:49:46.833018 env[1195]: time="2024-07-02T07:49:46.832950772Z" level=info msg="shim disconnected" id=3542a35c30198907a0053cc97bb0e3ce58e7ce24c8c0bb6aac6bc2413ebff5f3
Jul 2 07:49:46.833203 env[1195]: time="2024-07-02T07:49:46.833020284Z" level=warning msg="cleaning up after shim disconnected" id=3542a35c30198907a0053cc97bb0e3ce58e7ce24c8c0bb6aac6bc2413ebff5f3 namespace=k8s.io
Jul 2 07:49:46.833203 env[1195]: time="2024-07-02T07:49:46.833032297Z" level=info msg="cleaning up dead shim"
Jul 2 07:49:46.838942 env[1195]: time="2024-07-02T07:49:46.838892861Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4146 runtime=io.containerd.runc.v2\n"
Jul 2 07:49:47.093944 kubelet[2003]: I0702 07:49:47.093366 2003 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T07:49:47Z","lastTransitionTime":"2024-07-02T07:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 07:49:47.207666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3542a35c30198907a0053cc97bb0e3ce58e7ce24c8c0bb6aac6bc2413ebff5f3-rootfs.mount: Deactivated successfully.
Jul 2 07:49:47.745596 kubelet[2003]: E0702 07:49:47.745570 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:49:47.747632 env[1195]: time="2024-07-02T07:49:47.747585601Z" level=info msg="CreateContainer within sandbox \"9755eebb5ac6be5296af4dd51463798c7add2cdb2743925f65bfd19c54e28373\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 07:49:47.763442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2392654323.mount: Deactivated successfully.
Jul 2 07:49:47.777505 env[1195]: time="2024-07-02T07:49:47.777420104Z" level=info msg="CreateContainer within sandbox \"9755eebb5ac6be5296af4dd51463798c7add2cdb2743925f65bfd19c54e28373\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2961f5b359021248258628d55c2cf206e6ad0d21283e7bcac8ed5d4141e37e3f\""
Jul 2 07:49:47.778312 env[1195]: time="2024-07-02T07:49:47.778273737Z" level=info msg="StartContainer for \"2961f5b359021248258628d55c2cf206e6ad0d21283e7bcac8ed5d4141e37e3f\""
Jul 2 07:49:47.810664 systemd[1]: Started cri-containerd-2961f5b359021248258628d55c2cf206e6ad0d21283e7bcac8ed5d4141e37e3f.scope.
Jul 2 07:49:47.832940 systemd[1]: cri-containerd-2961f5b359021248258628d55c2cf206e6ad0d21283e7bcac8ed5d4141e37e3f.scope: Deactivated successfully.
Jul 2 07:49:47.833859 env[1195]: time="2024-07-02T07:49:47.833826622Z" level=info msg="StartContainer for \"2961f5b359021248258628d55c2cf206e6ad0d21283e7bcac8ed5d4141e37e3f\" returns successfully"
Jul 2 07:49:47.850714 env[1195]: time="2024-07-02T07:49:47.850652685Z" level=info msg="shim disconnected" id=2961f5b359021248258628d55c2cf206e6ad0d21283e7bcac8ed5d4141e37e3f
Jul 2 07:49:47.850714 env[1195]: time="2024-07-02T07:49:47.850702280Z" level=warning msg="cleaning up after shim disconnected" id=2961f5b359021248258628d55c2cf206e6ad0d21283e7bcac8ed5d4141e37e3f namespace=k8s.io
Jul 2 07:49:47.850714 env[1195]: time="2024-07-02T07:49:47.850712219Z" level=info msg="cleaning up dead shim"
Jul 2 07:49:47.856923 env[1195]: time="2024-07-02T07:49:47.856899040Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4201 runtime=io.containerd.runc.v2\n"
Jul 2 07:49:48.748944 kubelet[2003]: E0702 07:49:48.748916 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:49:48.755838 env[1195]: time="2024-07-02T07:49:48.755786099Z" level=info msg="CreateContainer within sandbox \"9755eebb5ac6be5296af4dd51463798c7add2cdb2743925f65bfd19c54e28373\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 07:49:48.769973 env[1195]: time="2024-07-02T07:49:48.769908590Z" level=info msg="CreateContainer within sandbox \"9755eebb5ac6be5296af4dd51463798c7add2cdb2743925f65bfd19c54e28373\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c8986140397d7074869d66e17d88c7bdd7ac8ce79cd298f8c80f4bcab6bd1987\""
Jul 2 07:49:48.770426 env[1195]: time="2024-07-02T07:49:48.770402549Z" level=info msg="StartContainer for \"c8986140397d7074869d66e17d88c7bdd7ac8ce79cd298f8c80f4bcab6bd1987\""
Jul 2 07:49:48.787004 systemd[1]: Started cri-containerd-c8986140397d7074869d66e17d88c7bdd7ac8ce79cd298f8c80f4bcab6bd1987.scope.
Jul 2 07:49:48.808374 env[1195]: time="2024-07-02T07:49:48.808322582Z" level=info msg="StartContainer for \"c8986140397d7074869d66e17d88c7bdd7ac8ce79cd298f8c80f4bcab6bd1987\" returns successfully"
Jul 2 07:49:48.990183 kubelet[2003]: E0702 07:49:48.990150 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:49:49.042982 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 2 07:49:49.207849 systemd[1]: run-containerd-runc-k8s.io-c8986140397d7074869d66e17d88c7bdd7ac8ce79cd298f8c80f4bcab6bd1987-runc.JvhYmI.mount: Deactivated successfully.
Jul 2 07:49:49.753186 kubelet[2003]: E0702 07:49:49.753163 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:49:49.763070 kubelet[2003]: I0702 07:49:49.763032 2003 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dcdfz" podStartSLOduration=7.762998951 podCreationTimestamp="2024-07-02 07:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:49:49.762984163 +0000 UTC m=+93.872038619" watchObservedRunningTime="2024-07-02 07:49:49.762998951 +0000 UTC m=+93.872053407"
Jul 2 07:49:49.990132 kubelet[2003]: E0702 07:49:49.990101 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:49:50.166110 systemd[1]: run-containerd-runc-k8s.io-c8986140397d7074869d66e17d88c7bdd7ac8ce79cd298f8c80f4bcab6bd1987-runc.4ENVk0.mount: Deactivated successfully.
Jul 2 07:49:51.465423 systemd-networkd[1019]: lxc_health: Link UP
Jul 2 07:49:51.471992 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 07:49:51.472075 systemd-networkd[1019]: lxc_health: Gained carrier
Jul 2 07:49:51.609696 kubelet[2003]: E0702 07:49:51.609672 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:49:51.756702 kubelet[2003]: E0702 07:49:51.756662 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:49:52.488722 systemd[1]: run-containerd-runc-k8s.io-c8986140397d7074869d66e17d88c7bdd7ac8ce79cd298f8c80f4bcab6bd1987-runc.79hROr.mount: Deactivated successfully.
Jul 2 07:49:52.758565 kubelet[2003]: E0702 07:49:52.758449 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:49:53.413109 systemd-networkd[1019]: lxc_health: Gained IPv6LL
Jul 2 07:49:53.760222 kubelet[2003]: E0702 07:49:53.760204 2003 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 07:49:54.588026 systemd[1]: run-containerd-runc-k8s.io-c8986140397d7074869d66e17d88c7bdd7ac8ce79cd298f8c80f4bcab6bd1987-runc.VhY01e.mount: Deactivated successfully.
Jul 2 07:49:58.760671 systemd[1]: run-containerd-runc-k8s.io-c8986140397d7074869d66e17d88c7bdd7ac8ce79cd298f8c80f4bcab6bd1987-runc.mWEgna.mount: Deactivated successfully.
Jul 2 07:49:58.800747 sshd[3812]: pam_unix(sshd:session): session closed for user core
Jul 2 07:49:58.803135 systemd[1]: sshd@25-10.0.0.87:22-10.0.0.1:52382.service: Deactivated successfully.
Jul 2 07:49:58.803846 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 07:49:58.804374 systemd-logind[1189]: Session 26 logged out. Waiting for processes to exit.
Jul 2 07:49:58.805181 systemd-logind[1189]: Removed session 26.