Jul 2 07:48:23.899628 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024 Jul 2 07:48:23.899648 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:48:23.899658 kernel: BIOS-provided physical RAM map: Jul 2 07:48:23.899664 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 2 07:48:23.899669 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 2 07:48:23.899675 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 2 07:48:23.899681 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 2 07:48:23.899687 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 2 07:48:23.899693 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 2 07:48:23.899700 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 2 07:48:23.899705 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jul 2 07:48:23.899711 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Jul 2 07:48:23.899716 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 2 07:48:23.899722 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 2 07:48:23.899729 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 2 07:48:23.899737 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 2 07:48:23.899743 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 2 07:48:23.899749 kernel: NX (Execute Disable) protection: active Jul 2 07:48:23.899755 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable Jul 2 07:48:23.899761 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable Jul 2 07:48:23.899767 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable Jul 2 07:48:23.899773 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable Jul 2 07:48:23.899781 kernel: extended physical RAM map: Jul 2 07:48:23.899787 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 2 07:48:23.899793 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 2 07:48:23.899800 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 2 07:48:23.899807 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jul 2 07:48:23.899813 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 2 07:48:23.899818 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 2 07:48:23.899824 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 2 07:48:23.899830 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1aa017] usable Jul 2 07:48:23.899836 kernel: reserve setup_data: [mem 0x000000009b1aa018-0x000000009b1e6e57] usable Jul 2 07:48:23.899842 kernel: reserve setup_data: [mem 0x000000009b1e6e58-0x000000009b3f7017] usable Jul 2 07:48:23.899848 kernel: reserve setup_data: [mem 0x000000009b3f7018-0x000000009b400c57] 
usable Jul 2 07:48:23.899854 kernel: reserve setup_data: [mem 0x000000009b400c58-0x000000009c8eefff] usable Jul 2 07:48:23.899860 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Jul 2 07:48:23.899867 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 2 07:48:23.899873 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 2 07:48:23.899879 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 2 07:48:23.899885 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 2 07:48:23.899894 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 2 07:48:23.899900 kernel: efi: EFI v2.70 by EDK II Jul 2 07:48:23.899907 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018 Jul 2 07:48:23.899915 kernel: random: crng init done Jul 2 07:48:23.899921 kernel: SMBIOS 2.8 present. Jul 2 07:48:23.899927 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Jul 2 07:48:23.899934 kernel: Hypervisor detected: KVM Jul 2 07:48:23.899940 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 07:48:23.899947 kernel: kvm-clock: cpu 0, msr 14192001, primary cpu clock Jul 2 07:48:23.899953 kernel: kvm-clock: using sched offset of 5535725535 cycles Jul 2 07:48:23.899960 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 07:48:23.899967 kernel: tsc: Detected 2794.748 MHz processor Jul 2 07:48:23.899977 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 07:48:23.899995 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 07:48:23.900002 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jul 2 07:48:23.900008 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 07:48:23.900015 kernel: Using GB pages for direct mapping Jul 2 07:48:23.900022 kernel: Secure boot disabled Jul 2 07:48:23.900028 kernel: ACPI: Early table checksum verification disabled Jul 2 07:48:23.900035 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 2 07:48:23.900041 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Jul 2 07:48:23.900050 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:48:23.900056 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:48:23.900063 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 2 07:48:23.900069 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:48:23.900076 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:48:23.900083 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:48:23.900089 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 2 07:48:23.900096 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Jul 2 07:48:23.900105 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Jul 2 07:48:23.900113 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 2 07:48:23.900120 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Jul 2 07:48:23.900126 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Jul 2 07:48:23.900133 kernel: ACPI: Reserving WAET table memory at [mem 
0x9cb77000-0x9cb77027] Jul 2 07:48:23.900139 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Jul 2 07:48:23.900146 kernel: No NUMA configuration found Jul 2 07:48:23.900152 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jul 2 07:48:23.900159 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jul 2 07:48:23.900165 kernel: Zone ranges: Jul 2 07:48:23.900174 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 07:48:23.900180 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jul 2 07:48:23.900187 kernel: Normal empty Jul 2 07:48:23.900195 kernel: Movable zone start for each node Jul 2 07:48:23.900202 kernel: Early memory node ranges Jul 2 07:48:23.900208 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 2 07:48:23.900215 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 2 07:48:23.900221 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 2 07:48:23.900228 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jul 2 07:48:23.900236 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jul 2 07:48:23.900242 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jul 2 07:48:23.900249 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jul 2 07:48:23.900256 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:48:23.900262 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 2 07:48:23.900269 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 2 07:48:23.900275 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:48:23.900282 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jul 2 07:48:23.900288 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 2 07:48:23.900296 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jul 2 07:48:23.900302 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 2 07:48:23.900309 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 07:48:23.900315 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 07:48:23.900322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 2 07:48:23.900328 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 07:48:23.900335 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 07:48:23.900341 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 07:48:23.900348 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 07:48:23.900355 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 07:48:23.900362 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 07:48:23.900368 kernel: TSC deadline timer available Jul 2 07:48:23.900375 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 2 07:48:23.900381 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 2 07:48:23.900388 kernel: kvm-guest: setup PV sched yield Jul 2 07:48:23.900397 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Jul 2 07:48:23.900403 kernel: Booting paravirtualized kernel on KVM Jul 2 07:48:23.900410 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 07:48:23.900419 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Jul 2 07:48:23.900427 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Jul 2 07:48:23.900434 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Jul 
2 07:48:23.900446 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 2 07:48:23.900454 kernel: kvm-guest: setup async PF for cpu 0 Jul 2 07:48:23.900461 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0 Jul 2 07:48:23.900467 kernel: kvm-guest: PV spinlocks enabled Jul 2 07:48:23.900474 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 07:48:23.900481 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jul 2 07:48:23.900488 kernel: Policy zone: DMA32 Jul 2 07:48:23.900496 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:48:23.900503 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 07:48:23.900512 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 07:48:23.900519 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 07:48:23.900525 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 07:48:23.900533 kernel: Memory: 2398372K/2567000K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 168368K reserved, 0K cma-reserved) Jul 2 07:48:23.900548 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 2 07:48:23.900556 kernel: ftrace: allocating 34514 entries in 135 pages Jul 2 07:48:23.900563 kernel: ftrace: allocated 135 pages with 4 groups Jul 2 07:48:23.900570 kernel: rcu: Hierarchical RCU implementation. Jul 2 07:48:23.900577 kernel: rcu: RCU event tracing is enabled. Jul 2 07:48:23.900584 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 2 07:48:23.900591 kernel: Rude variant of Tasks RCU enabled. Jul 2 07:48:23.900598 kernel: Tracing variant of Tasks RCU enabled. Jul 2 07:48:23.900605 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 07:48:23.900613 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 2 07:48:23.900620 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 2 07:48:23.900627 kernel: Console: colour dummy device 80x25 Jul 2 07:48:23.900634 kernel: printk: console [ttyS0] enabled Jul 2 07:48:23.900641 kernel: ACPI: Core revision 20210730 Jul 2 07:48:23.900648 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 2 07:48:23.900655 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 07:48:23.900662 kernel: x2apic enabled Jul 2 07:48:23.900669 kernel: Switched APIC routing to physical x2apic. Jul 2 07:48:23.900676 kernel: kvm-guest: setup PV IPIs Jul 2 07:48:23.900684 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 2 07:48:23.900691 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 2 07:48:23.900698 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jul 2 07:48:23.900705 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 2 07:48:23.900712 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 2 07:48:23.900718 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 2 07:48:23.900725 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 07:48:23.900732 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 07:48:23.900741 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 07:48:23.900748 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 07:48:23.900754 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 2 07:48:23.900761 kernel: RETBleed: Mitigation: untrained return thunk Jul 2 07:48:23.900771 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 07:48:23.900778 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 2 07:48:23.900785 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 07:48:23.900792 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 07:48:23.900802 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 07:48:23.900810 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 07:48:23.900817 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 2 07:48:23.900824 kernel: Freeing SMP alternatives memory: 32K Jul 2 07:48:23.900831 kernel: pid_max: default: 32768 minimum: 301 Jul 2 07:48:23.900838 kernel: LSM: Security Framework initializing Jul 2 07:48:23.900845 kernel: SELinux: Initializing. Jul 2 07:48:23.900852 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 07:48:23.900859 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 07:48:23.900866 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 2 07:48:23.900875 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 2 07:48:23.900882 kernel: ... version: 0 Jul 2 07:48:23.900889 kernel: ... bit width: 48 Jul 2 07:48:23.900896 kernel: ... generic registers: 6 Jul 2 07:48:23.900903 kernel: ... value mask: 0000ffffffffffff Jul 2 07:48:23.900910 kernel: ... max period: 00007fffffffffff Jul 2 07:48:23.900916 kernel: ... fixed-purpose events: 0 Jul 2 07:48:23.900923 kernel: ... event mask: 000000000000003f Jul 2 07:48:23.900930 kernel: signal: max sigframe size: 1776 Jul 2 07:48:23.900939 kernel: rcu: Hierarchical SRCU implementation. Jul 2 07:48:23.900946 kernel: smp: Bringing up secondary CPUs ... Jul 2 07:48:23.900953 kernel: x86: Booting SMP configuration: Jul 2 07:48:23.900960 kernel: .... 
node #0, CPUs: #1 Jul 2 07:48:23.900967 kernel: kvm-clock: cpu 1, msr 14192041, secondary cpu clock Jul 2 07:48:23.900973 kernel: kvm-guest: setup async PF for cpu 1 Jul 2 07:48:23.900990 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0 Jul 2 07:48:23.900999 kernel: #2 Jul 2 07:48:23.901007 kernel: kvm-clock: cpu 2, msr 14192081, secondary cpu clock Jul 2 07:48:23.901015 kernel: kvm-guest: setup async PF for cpu 2 Jul 2 07:48:23.901025 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0 Jul 2 07:48:23.901031 kernel: #3 Jul 2 07:48:23.901038 kernel: kvm-clock: cpu 3, msr 141920c1, secondary cpu clock Jul 2 07:48:23.901045 kernel: kvm-guest: setup async PF for cpu 3 Jul 2 07:48:23.901052 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0 Jul 2 07:48:23.901059 kernel: smp: Brought up 1 node, 4 CPUs Jul 2 07:48:23.901066 kernel: smpboot: Max logical packages: 1 Jul 2 07:48:23.901073 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 2 07:48:23.901080 kernel: devtmpfs: initialized Jul 2 07:48:23.901088 kernel: x86/mm: Memory block size: 128MB Jul 2 07:48:23.901095 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 2 07:48:23.901102 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 2 07:48:23.901109 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jul 2 07:48:23.901118 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 2 07:48:23.901125 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 2 07:48:23.901132 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 07:48:23.901140 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 2 07:48:23.901146 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 07:48:23.901155 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 07:48:23.901162 kernel: audit: initializing netlink subsys (disabled) Jul 2 07:48:23.901169 kernel: audit: type=2000 audit(1719906503.378:1): state=initialized audit_enabled=0 res=1 Jul 2 07:48:23.901175 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 07:48:23.901182 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 07:48:23.901189 kernel: cpuidle: using governor menu Jul 2 07:48:23.901196 kernel: ACPI: bus type PCI registered Jul 2 07:48:23.901203 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 07:48:23.901210 kernel: dca service started, version 1.12.1 Jul 2 07:48:23.901218 kernel: PCI: Using configuration type 1 for base access Jul 2 07:48:23.901225 kernel: PCI: Using configuration type 1 for extended access Jul 2 07:48:23.901232 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 2 07:48:23.901239 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 07:48:23.901247 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 07:48:23.901253 kernel: ACPI: Added _OSI(Module Device) Jul 2 07:48:23.901260 kernel: ACPI: Added _OSI(Processor Device) Jul 2 07:48:23.901267 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 07:48:23.901274 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 07:48:23.901283 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 2 07:48:23.901290 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 2 07:48:23.901297 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 2 07:48:23.901304 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 07:48:23.901310 kernel: ACPI: Interpreter enabled Jul 2 07:48:23.901317 kernel: ACPI: PM: (supports S0 S3 S5) Jul 2 07:48:23.901324 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 07:48:23.901331 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 07:48:23.901338 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 2 07:48:23.901346 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 07:48:23.901493 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 07:48:23.901505 kernel: acpiphp: Slot [3] registered Jul 2 07:48:23.901512 kernel: acpiphp: Slot [4] registered Jul 2 07:48:23.901519 kernel: acpiphp: Slot [5] registered Jul 2 07:48:23.901526 kernel: acpiphp: Slot [6] registered Jul 2 07:48:23.901533 kernel: acpiphp: Slot [7] registered Jul 2 07:48:23.901548 kernel: acpiphp: Slot [8] registered Jul 2 07:48:23.901557 kernel: acpiphp: Slot [9] registered Jul 2 07:48:23.901564 kernel: acpiphp: Slot [10] registered Jul 2 07:48:23.901571 kernel: acpiphp: Slot [11] registered Jul 2 07:48:23.901578 kernel: acpiphp: Slot [12] registered Jul 2 07:48:23.901584 kernel: acpiphp: Slot [13] registered Jul 2 07:48:23.901591 kernel: acpiphp: Slot [14] registered Jul 2 07:48:23.901598 kernel: acpiphp: Slot [15] registered Jul 2 07:48:23.901605 kernel: acpiphp: Slot [16] registered Jul 2 07:48:23.901612 kernel: acpiphp: Slot [17] registered Jul 2 07:48:23.901619 kernel: acpiphp: Slot [18] registered Jul 2 07:48:23.901627 kernel: acpiphp: Slot [19] registered Jul 2 07:48:23.901634 kernel: acpiphp: Slot [20] registered Jul 2 07:48:23.901640 kernel: acpiphp: Slot [21] registered Jul 2 07:48:23.901647 kernel: acpiphp: Slot [22] registered Jul 2 07:48:23.901654 kernel: acpiphp: Slot [23] registered Jul 2 07:48:23.901661 kernel: acpiphp: Slot [24] registered Jul 2 07:48:23.901668 kernel: acpiphp: Slot [25] registered Jul 2 07:48:23.901674 kernel: acpiphp: Slot [26] registered Jul 2 07:48:23.901681 kernel: acpiphp: Slot [27] registered Jul 2 07:48:23.901689 kernel: acpiphp: Slot [28] registered Jul 2 07:48:23.901696 kernel: acpiphp: Slot [29] registered Jul 2 07:48:23.901703 kernel: acpiphp: Slot [30] registered Jul 2 07:48:23.901710 kernel: acpiphp: Slot [31] registered Jul 2 07:48:23.901717 kernel: PCI host bridge to bus 0000:00 Jul 2 07:48:23.901809 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 07:48:23.901877 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 07:48:23.901944 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 07:48:23.902062 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Jul 2 07:48:23.902132 kernel: pci_bus 0000:00: 
root bus resource [mem 0x800000000-0x87fffffff window] Jul 2 07:48:23.902198 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 07:48:23.902297 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 07:48:23.902412 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 07:48:23.902527 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jul 2 07:48:23.902620 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Jul 2 07:48:23.902697 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 2 07:48:23.902772 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 2 07:48:23.903010 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 2 07:48:23.903145 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 2 07:48:23.903268 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 07:48:23.903367 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 2 07:48:23.903469 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jul 2 07:48:23.903603 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Jul 2 07:48:23.903705 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jul 2 07:48:23.903804 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Jul 2 07:48:23.903945 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jul 2 07:48:23.904060 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Jul 2 07:48:23.904154 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 07:48:23.904279 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 07:48:23.904379 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Jul 2 07:48:23.904483 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jul 2 07:48:23.904593 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jul 2 07:48:23.904711 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jul 2 07:48:23.904814 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jul 2 07:48:23.904912 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jul 2 07:48:23.905967 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jul 2 07:48:23.906115 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Jul 2 07:48:23.906229 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jul 2 07:48:23.906328 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Jul 2 07:48:23.906424 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jul 2 07:48:23.906521 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jul 2 07:48:23.906535 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 07:48:23.906563 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 07:48:23.906573 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 07:48:23.906583 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 07:48:23.906592 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 07:48:23.906602 kernel: iommu: Default domain type: Translated Jul 2 07:48:23.906612 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 07:48:23.906714 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 2 07:48:23.906811 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 07:48:23.906913 kernel: pci 
0000:00:02.0: vgaarb: bridge control possible Jul 2 07:48:23.906930 kernel: vgaarb: loaded Jul 2 07:48:23.906941 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 07:48:23.906951 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 07:48:23.906995 kernel: PTP clock support registered Jul 2 07:48:23.907008 kernel: Registered efivars operations Jul 2 07:48:23.907019 kernel: PCI: Using ACPI for IRQ routing Jul 2 07:48:23.907030 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 07:48:23.907041 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 2 07:48:23.907051 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jul 2 07:48:23.907064 kernel: e820: reserve RAM buffer [mem 0x9b1aa018-0x9bffffff] Jul 2 07:48:23.907074 kernel: e820: reserve RAM buffer [mem 0x9b3f7018-0x9bffffff] Jul 2 07:48:23.907083 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jul 2 07:48:23.907093 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jul 2 07:48:23.907103 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 2 07:48:23.907113 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 2 07:48:23.907123 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 07:48:23.907132 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 07:48:23.907145 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 07:48:23.907168 kernel: pnp: PnP ACPI init Jul 2 07:48:23.907303 kernel: pnp 00:02: [dma 2] Jul 2 07:48:23.907319 kernel: pnp: PnP ACPI: found 6 devices Jul 2 07:48:23.907329 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 07:48:23.907339 kernel: NET: Registered PF_INET protocol family Jul 2 07:48:23.907349 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 07:48:23.907359 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 07:48:23.907369 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 07:48:23.907383 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 07:48:23.907393 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 2 07:48:23.907403 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 07:48:23.907413 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 07:48:23.907423 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 07:48:23.907433 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 07:48:23.907444 kernel: NET: Registered PF_XDP protocol family Jul 2 07:48:23.907559 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jul 2 07:48:23.907680 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jul 2 07:48:23.907777 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 07:48:23.907876 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 07:48:23.909125 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 07:48:23.909272 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Jul 2 07:48:23.909385 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Jul 2 07:48:23.909524 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 2 07:48:23.909659 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 07:48:23.909802 kernel: pci 
0000:00:01.0: Activating ISA DMA hang workarounds Jul 2 07:48:23.909819 kernel: PCI: CLS 0 bytes, default 64 Jul 2 07:48:23.909829 kernel: Initialise system trusted keyrings Jul 2 07:48:23.909840 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 07:48:23.909850 kernel: Key type asymmetric registered Jul 2 07:48:23.909860 kernel: Asymmetric key parser 'x509' registered Jul 2 07:48:23.909879 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 07:48:23.909897 kernel: io scheduler mq-deadline registered Jul 2 07:48:23.909911 kernel: io scheduler kyber registered Jul 2 07:48:23.909921 kernel: io scheduler bfq registered Jul 2 07:48:23.909931 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 07:48:23.909942 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 07:48:23.909953 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 2 07:48:23.909978 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 07:48:23.910001 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 07:48:23.910012 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 07:48:23.910022 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 07:48:23.910035 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 07:48:23.910060 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 07:48:23.910247 kernel: rtc_cmos 00:05: RTC can wake from S4 Jul 2 07:48:23.910269 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 07:48:23.910414 kernel: rtc_cmos 00:05: registered as rtc0 Jul 2 07:48:23.910559 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T07:48:23 UTC (1719906503) Jul 2 07:48:23.910690 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 2 07:48:23.910719 kernel: efifb: probing for efifb Jul 2 07:48:23.910730 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jul 2 07:48:23.910740 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jul 2 07:48:23.910751 kernel: efifb: scrolling: redraw Jul 2 07:48:23.910775 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 2 07:48:23.910786 kernel: Console: switching to colour frame buffer device 160x50 Jul 2 07:48:23.910796 kernel: fb0: EFI VGA frame buffer device Jul 2 07:48:23.910824 kernel: pstore: Registered efi as persistent store backend Jul 2 07:48:23.910835 kernel: NET: Registered PF_INET6 protocol family Jul 2 07:48:23.910845 kernel: Segment Routing with IPv6 Jul 2 07:48:23.910864 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 07:48:23.910881 kernel: NET: Registered PF_PACKET protocol family Jul 2 07:48:23.910894 kernel: Key type dns_resolver registered Jul 2 07:48:23.910904 kernel: IPI shorthand broadcast: enabled Jul 2 07:48:23.910929 kernel: sched_clock: Marking stable (444077806, 123662393)->(622632348, -54892149) Jul 2 07:48:23.910940 kernel: registered taskstats version 1 Jul 2 07:48:23.910953 kernel: Loading compiled-in X.509 certificates Jul 2 07:48:23.910964 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 07:48:23.910974 kernel: Key type .fscrypt registered Jul 2 07:48:23.911019 kernel: Key type fscrypt-provisioning registered Jul 2 07:48:23.911030 kernel: pstore: Using crash dump compression: deflate Jul 2 07:48:23.911044 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 2 07:48:23.911054 kernel: ima: Allocated hash algorithm: sha1 Jul 2 07:48:23.911064 kernel: ima: No architecture policies found Jul 2 07:48:23.911074 kernel: clk: Disabling unused clocks Jul 2 07:48:23.911087 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 07:48:23.911098 kernel: Write protecting the kernel read-only data: 28672k Jul 2 07:48:23.911108 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 07:48:23.911119 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 07:48:23.911129 kernel: Run /init as init process Jul 2 07:48:23.911158 kernel: with arguments: Jul 2 07:48:23.911194 kernel: /init Jul 2 07:48:23.911216 kernel: with environment: Jul 2 07:48:23.911226 kernel: HOME=/ Jul 2 07:48:23.911239 kernel: TERM=linux Jul 2 07:48:23.911249 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 07:48:23.911262 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:48:23.911276 systemd[1]: Detected virtualization kvm. Jul 2 07:48:23.911287 systemd[1]: Detected architecture x86-64. Jul 2 07:48:23.911298 systemd[1]: Running in initrd. Jul 2 07:48:23.911308 systemd[1]: No hostname configured, using default hostname. Jul 2 07:48:23.911318 systemd[1]: Hostname set to . Jul 2 07:48:23.911332 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:48:23.911343 systemd[1]: Queued start job for default target initrd.target. Jul 2 07:48:23.911353 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:48:23.911364 systemd[1]: Reached target cryptsetup.target. Jul 2 07:48:23.911374 systemd[1]: Reached target paths.target. Jul 2 07:48:23.911404 systemd[1]: Reached target slices.target. Jul 2 07:48:23.911416 systemd[1]: Reached target swap.target. Jul 2 07:48:23.911427 systemd[1]: Reached target timers.target. Jul 2 07:48:23.911441 systemd[1]: Listening on iscsid.socket. Jul 2 07:48:23.911452 systemd[1]: Listening on iscsiuio.socket. Jul 2 07:48:23.911463 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 07:48:23.911474 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 07:48:23.911485 systemd[1]: Listening on systemd-journald.socket. Jul 2 07:48:23.911496 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:48:23.911506 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:48:23.911518 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:48:23.911531 systemd[1]: Reached target sockets.target. Jul 2 07:48:23.911551 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:48:23.911563 systemd[1]: Finished network-cleanup.service. Jul 2 07:48:23.911574 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 07:48:23.911585 systemd[1]: Starting systemd-journald.service... Jul 2 07:48:23.911596 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:48:23.911607 systemd[1]: Starting systemd-resolved.service... Jul 2 07:48:23.911617 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 07:48:23.911628 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:48:23.911642 systemd[1]: Finished systemd-fsck-usr.service. 
Jul 2 07:48:23.911654 kernel: audit: type=1130 audit(1719906503.898:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:23.911665 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:48:23.911676 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 07:48:23.911690 systemd-journald[197]: Journal started Jul 2 07:48:23.911756 systemd-journald[197]: Runtime Journal (/run/log/journal/8d8e79fe77734dd899fb541ccfcba420) is 6.0M, max 48.4M, 42.4M free. Jul 2 07:48:23.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:23.911428 systemd-modules-load[198]: Inserted module 'overlay' Jul 2 07:48:23.918318 kernel: audit: type=1130 audit(1719906503.911:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:23.918337 systemd[1]: Started systemd-journald.service. Jul 2 07:48:23.918351 kernel: audit: type=1130 audit(1719906503.918:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:23.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:23.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:23.919261 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:48:23.919594 systemd-resolved[199]: Positive Trust Anchors: Jul 2 07:48:23.928193 kernel: audit: type=1130 audit(1719906503.923:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:23.928220 kernel: audit: type=1130 audit(1719906503.927:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:23.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:23.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:23.919603 systemd-resolved[199]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:48:23.919629 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:48:23.921802 systemd-resolved[199]: Defaulting to hostname 'linux'. Jul 2 07:48:23.923386 systemd[1]: Started systemd-resolved.service. Jul 2 07:48:23.928326 systemd[1]: Reached target nss-lookup.target. Jul 2 07:48:23.942916 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 07:48:23.951007 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 07:48:23.955231 systemd-modules-load[198]: Inserted module 'br_netfilter' Jul 2 07:48:23.956007 kernel: Bridge firewalling registered Jul 2 07:48:23.959616 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 07:48:23.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:23.962546 systemd[1]: Starting dracut-cmdline.service... Jul 2 07:48:23.966275 kernel: audit: type=1130 audit(1719906503.960:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:23.970932 dracut-cmdline[215]: dracut-dracut-053 Jul 2 07:48:23.973036 kernel: SCSI subsystem initialized Jul 2 07:48:23.973059 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:48:23.986350 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 07:48:23.986417 kernel: device-mapper: uevent: version 1.0.3 Jul 2 07:48:23.987735 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 07:48:23.990770 systemd-modules-load[198]: Inserted module 'dm_multipath' Jul 2 07:48:23.991867 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:48:23.996558 kernel: audit: type=1130 audit(1719906503.990:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:23.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:23.992658 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:48:24.002163 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 07:48:24.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:24.007003 kernel: audit: type=1130 audit(1719906504.003:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:24.034006 kernel: Loading iSCSI transport class v2.0-870. Jul 2 07:48:24.050013 kernel: iscsi: registered transport (tcp) Jul 2 07:48:24.074014 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:48:24.074052 kernel: QLogic iSCSI HBA Driver Jul 2 07:48:24.108085 systemd[1]: Finished dracut-cmdline.service. Jul 2 07:48:24.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:24.111062 systemd[1]: Starting dracut-pre-udev.service... Jul 2 07:48:24.114662 kernel: audit: type=1130 audit(1719906504.108:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:24.158008 kernel: raid6: avx2x4 gen() 26411 MB/s Jul 2 07:48:24.185007 kernel: raid6: avx2x4 xor() 6589 MB/s Jul 2 07:48:24.202004 kernel: raid6: avx2x2 gen() 25817 MB/s Jul 2 07:48:24.219005 kernel: raid6: avx2x2 xor() 17871 MB/s Jul 2 07:48:24.236008 kernel: raid6: avx2x1 gen() 23027 MB/s Jul 2 07:48:24.253006 kernel: raid6: avx2x1 xor() 14189 MB/s Jul 2 07:48:24.270006 kernel: raid6: sse2x4 gen() 13823 MB/s Jul 2 07:48:24.287005 kernel: raid6: sse2x4 xor() 6901 MB/s Jul 2 07:48:24.304009 kernel: raid6: sse2x2 gen() 15423 MB/s Jul 2 07:48:24.321010 kernel: raid6: sse2x2 xor() 9829 MB/s Jul 2 07:48:24.338006 kernel: raid6: sse2x1 gen() 11881 MB/s Jul 2 07:48:24.355338 kernel: raid6: sse2x1 xor() 7789 MB/s Jul 2 07:48:24.355358 kernel: raid6: using algorithm avx2x4 gen() 26411 MB/s Jul 2 07:48:24.355368 kernel: raid6: .... xor() 6589 MB/s, rmw enabled Jul 2 07:48:24.357004 kernel: raid6: using avx2x2 recovery algorithm Jul 2 07:48:24.369008 kernel: xor: automatically using best checksumming function avx Jul 2 07:48:24.461030 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:48:24.471302 systemd[1]: Finished dracut-pre-udev.service. Jul 2 07:48:24.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:24.472000 audit: BPF prog-id=7 op=LOAD Jul 2 07:48:24.472000 audit: BPF prog-id=8 op=LOAD Jul 2 07:48:24.473842 systemd[1]: Starting systemd-udevd.service... Jul 2 07:48:24.488876 systemd-udevd[399]: Using default interface naming scheme 'v252'. Jul 2 07:48:24.493195 systemd[1]: Started systemd-udevd.service. Jul 2 07:48:24.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:24.497221 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 07:48:24.510126 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Jul 2 07:48:24.538183 systemd[1]: Finished dracut-pre-trigger.service. 
Jul 2 07:48:24.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:24.540570 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:48:24.578151 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:48:24.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:24.619004 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 2 07:48:24.621040 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:48:24.628450 kernel: libata version 3.00 loaded. Jul 2 07:48:24.628484 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 07:48:24.629186 kernel: GPT:9289727 != 19775487 Jul 2 07:48:24.629207 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 07:48:24.630078 kernel: GPT:9289727 != 19775487 Jul 2 07:48:24.631343 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 07:48:24.631366 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:48:24.633008 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 07:48:24.640349 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 07:48:24.640373 kernel: AES CTR mode by8 optimization enabled Jul 2 07:48:24.641167 kernel: scsi host0: ata_piix Jul 2 07:48:24.644363 kernel: scsi host1: ata_piix Jul 2 07:48:24.644502 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jul 2 07:48:24.644514 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jul 2 07:48:24.653015 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (465) Jul 2 07:48:24.655141 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 07:48:24.656365 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 07:48:24.662991 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 07:48:24.669554 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 07:48:24.674823 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:48:24.677709 systemd[1]: Starting disk-uuid.service... Jul 2 07:48:24.684723 disk-uuid[520]: Primary Header is updated. Jul 2 07:48:24.684723 disk-uuid[520]: Secondary Entries is updated. Jul 2 07:48:24.684723 disk-uuid[520]: Secondary Header is updated. Jul 2 07:48:24.689022 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:48:24.693013 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:48:24.803089 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 2 07:48:24.806030 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 2 07:48:24.836386 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 2 07:48:24.836633 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 07:48:24.854106 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jul 2 07:48:25.693968 disk-uuid[521]: The operation has completed successfully. Jul 2 07:48:25.695840 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:48:25.798609 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:48:25.798747 systemd[1]: Finished disk-uuid.service. 
Jul 2 07:48:25.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:25.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:25.801567 systemd[1]: Starting verity-setup.service... Jul 2 07:48:25.815029 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 2 07:48:25.833110 systemd[1]: Found device dev-mapper-usr.device. Jul 2 07:48:25.836585 systemd[1]: Mounting sysusr-usr.mount... Jul 2 07:48:25.839643 systemd[1]: Finished verity-setup.service. Jul 2 07:48:25.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:25.907011 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:48:25.907383 systemd[1]: Mounted sysusr-usr.mount. Jul 2 07:48:25.907680 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 07:48:25.908671 systemd[1]: Starting ignition-setup.service... Jul 2 07:48:25.910439 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 07:48:25.919278 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:48:25.919312 kernel: BTRFS info (device vda6): using free space tree Jul 2 07:48:25.919322 kernel: BTRFS info (device vda6): has skinny extents Jul 2 07:48:25.931306 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 07:48:25.939266 systemd[1]: Finished ignition-setup.service. Jul 2 07:48:25.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:25.940181 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 07:48:25.993443 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 07:48:25.995888 systemd[1]: Starting systemd-networkd.service... Jul 2 07:48:25.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:25.994000 audit: BPF prog-id=9 op=LOAD Jul 2 07:48:26.031974 ignition[633]: Ignition 2.14.0 Jul 2 07:48:26.031996 ignition[633]: Stage: fetch-offline Jul 2 07:48:26.032082 ignition[633]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:48:26.032092 ignition[633]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:48:26.032223 ignition[633]: parsed url from cmdline: "" Jul 2 07:48:26.032226 ignition[633]: no config URL provided Jul 2 07:48:26.032230 ignition[633]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:48:26.032237 ignition[633]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:48:26.032257 ignition[633]: op(1): [started] loading QEMU firmware config module Jul 2 07:48:26.032261 ignition[633]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 2 07:48:26.035814 ignition[633]: op(1): [finished] loading QEMU firmware config module Jul 2 07:48:26.035837 ignition[633]: QEMU firmware config was not found. Ignoring... 
Jul 2 07:48:26.037028 ignition[633]: parsing config with SHA512: 8b82e333bceb5ffd2be5274fd0f1bf7e8c50377f252e5351fc059f431fd8c0fc240e7a29f5b5720420f6516f8f174eef0490708253f78484ca8053062d9bb0bb Jul 2 07:48:26.047412 unknown[633]: fetched base config from "system" Jul 2 07:48:26.047867 ignition[633]: fetch-offline: fetch-offline passed Jul 2 07:48:26.047426 unknown[633]: fetched user config from "qemu" Jul 2 07:48:26.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:26.047960 ignition[633]: Ignition finished successfully Jul 2 07:48:26.049228 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 07:48:26.063376 systemd-networkd[712]: lo: Link UP Jul 2 07:48:26.063385 systemd-networkd[712]: lo: Gained carrier Jul 2 07:48:26.065153 systemd-networkd[712]: Enumeration completed Jul 2 07:48:26.065370 systemd[1]: Started systemd-networkd.service. Jul 2 07:48:26.067537 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:48:26.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:26.067743 systemd[1]: Reached target network.target. Jul 2 07:48:26.069679 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 07:48:26.070823 systemd[1]: Starting ignition-kargs.service... Jul 2 07:48:26.074243 systemd-networkd[712]: eth0: Link UP Jul 2 07:48:26.074255 systemd-networkd[712]: eth0: Gained carrier Jul 2 07:48:26.074573 systemd[1]: Starting iscsiuio.service... Jul 2 07:48:26.087248 ignition[716]: Ignition 2.14.0 Jul 2 07:48:26.087259 ignition[716]: Stage: kargs Jul 2 07:48:26.087353 ignition[716]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:48:26.087363 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:48:26.088127 ignition[716]: kargs: kargs passed Jul 2 07:48:26.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:26.089993 systemd[1]: Finished ignition-kargs.service. Jul 2 07:48:26.088171 ignition[716]: Ignition finished successfully Jul 2 07:48:26.092597 systemd[1]: Starting ignition-disks.service... Jul 2 07:48:26.100193 ignition[725]: Ignition 2.14.0 Jul 2 07:48:26.100203 ignition[725]: Stage: disks Jul 2 07:48:26.100314 ignition[725]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:48:26.102494 systemd[1]: Finished ignition-disks.service. Jul 2 07:48:26.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:26.100325 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:48:26.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:26.104208 systemd[1]: Reached target initrd-root-device.target. Jul 2 07:48:26.101200 ignition[725]: disks: disks passed Jul 2 07:48:26.105796 systemd[1]: Reached target local-fs-pre.target. 
Jul 2 07:48:26.113410 iscsid[732]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:48:26.113410 iscsid[732]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 07:48:26.113410 iscsid[732]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 07:48:26.113410 iscsid[732]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 07:48:26.113410 iscsid[732]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 07:48:26.113410 iscsid[732]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:48:26.113410 iscsid[732]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:48:26.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:26.101248 ignition[725]: Ignition finished successfully Jul 2 07:48:26.105866 systemd[1]: Reached target local-fs.target. Jul 2 07:48:26.106210 systemd[1]: Reached target sysinit.target. Jul 2 07:48:26.106373 systemd[1]: Reached target basic.target. Jul 2 07:48:26.106708 systemd[1]: Started iscsiuio.service. Jul 2 07:48:26.107928 systemd[1]: Starting iscsid.service... Jul 2 07:48:26.112144 systemd-networkd[712]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 07:48:26.115595 systemd[1]: Started iscsid.service. Jul 2 07:48:26.119964 systemd[1]: Starting dracut-initqueue.service... Jul 2 07:48:26.134245 systemd[1]: Finished dracut-initqueue.service. Jul 2 07:48:26.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:26.135547 systemd[1]: Reached target remote-fs-pre.target. Jul 2 07:48:26.137043 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:48:26.137938 systemd[1]: Reached target remote-fs.target. Jul 2 07:48:26.139501 systemd[1]: Starting dracut-pre-mount.service... Jul 2 07:48:26.150161 systemd[1]: Finished dracut-pre-mount.service. Jul 2 07:48:26.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:26.151752 systemd[1]: Starting systemd-fsck-root.service... Jul 2 07:48:26.188110 systemd-fsck[748]: ROOT: clean, 614/553520 files, 56020/553472 blocks Jul 2 07:48:26.193334 systemd[1]: Finished systemd-fsck-root.service. Jul 2 07:48:26.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:26.196436 systemd[1]: Mounting sysroot.mount... Jul 2 07:48:26.205009 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 07:48:26.205016 systemd[1]: Mounted sysroot.mount. Jul 2 07:48:26.205805 systemd[1]: Reached target initrd-root-fs.target. Jul 2 07:48:26.208199 systemd[1]: Mounting sysroot-usr.mount...
Jul 2 07:48:26.208577 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 07:48:26.208622 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:48:26.208651 systemd[1]: Reached target ignition-diskful.target. Jul 2 07:48:26.211397 systemd[1]: Mounted sysroot-usr.mount. Jul 2 07:48:26.213137 systemd[1]: Starting initrd-setup-root.service... Jul 2 07:48:26.217753 initrd-setup-root[758]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:48:26.221215 initrd-setup-root[766]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:48:26.224438 initrd-setup-root[774]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:48:26.228660 initrd-setup-root[782]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:48:26.255523 systemd[1]: Finished initrd-setup-root.service. Jul 2 07:48:26.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:26.257215 systemd[1]: Starting ignition-mount.service... Jul 2 07:48:26.258568 systemd[1]: Starting sysroot-boot.service... Jul 2 07:48:26.263164 bash[799]: umount: /sysroot/usr/share/oem: not mounted. Jul 2 07:48:26.274898 ignition[801]: INFO : Ignition 2.14.0 Jul 2 07:48:26.274898 ignition[801]: INFO : Stage: mount Jul 2 07:48:26.276823 ignition[801]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:48:26.276823 ignition[801]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:48:26.276823 ignition[801]: INFO : mount: mount passed Jul 2 07:48:26.276823 ignition[801]: INFO : Ignition finished successfully Jul 2 07:48:26.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:26.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:26.276576 systemd[1]: Finished ignition-mount.service. Jul 2 07:48:26.277793 systemd[1]: Finished sysroot-boot.service. Jul 2 07:48:26.846285 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:48:26.855016 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (809) Jul 2 07:48:26.857093 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:48:26.857109 kernel: BTRFS info (device vda6): using free space tree Jul 2 07:48:26.857119 kernel: BTRFS info (device vda6): has skinny extents Jul 2 07:48:26.861271 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 07:48:26.863846 systemd[1]: Starting ignition-files.service... 
Jul 2 07:48:26.881598 ignition[829]: INFO : Ignition 2.14.0 Jul 2 07:48:26.881598 ignition[829]: INFO : Stage: files Jul 2 07:48:26.883597 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:48:26.883597 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:48:26.883597 ignition[829]: DEBUG : files: compiled without relabeling support, skipping Jul 2 07:48:26.887359 ignition[829]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 07:48:26.887359 ignition[829]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 07:48:26.887359 ignition[829]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 07:48:26.887359 ignition[829]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 07:48:26.887359 ignition[829]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 07:48:26.887359 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jul 2 07:48:26.887359 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 07:48:26.887359 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:48:26.887359 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:48:26.887359 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:48:26.887359 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:48:26.887359 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:48:26.887359 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jul 2 07:48:26.885726 unknown[829]: wrote ssh authorized keys file for user: core Jul 2 07:48:27.262441 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jul 2 07:48:27.677729 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:48:27.677729 ignition[829]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jul 2 07:48:27.681920 ignition[829]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 07:48:27.681920 ignition[829]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 07:48:27.681920 ignition[829]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jul 2 07:48:27.681920 ignition[829]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 07:48:27.681920 ignition[829]: INFO : files: op(9): op(a): 
[started] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 07:48:27.698854 ignition[829]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 07:48:27.701563 ignition[829]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 07:48:27.701563 ignition[829]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:48:27.701563 ignition[829]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:48:27.701563 ignition[829]: INFO : files: files passed Jul 2 07:48:27.701563 ignition[829]: INFO : Ignition finished successfully Jul 2 07:48:27.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.700413 systemd[1]: Finished ignition-files.service. Jul 2 07:48:27.702352 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 07:48:27.704088 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 07:48:27.718645 initrd-setup-root-after-ignition[853]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 2 07:48:27.704674 systemd[1]: Starting ignition-quench.service... Jul 2 07:48:27.721470 initrd-setup-root-after-ignition[855]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:48:27.709240 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 07:48:27.710949 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 07:48:27.711048 systemd[1]: Finished ignition-quench.service. Jul 2 07:48:27.712692 systemd[1]: Reached target ignition-complete.target. Jul 2 07:48:27.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.713586 systemd[1]: Starting initrd-parse-etc.service... Jul 2 07:48:27.726170 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:48:27.726286 systemd[1]: Finished initrd-parse-etc.service. Jul 2 07:48:27.727485 systemd[1]: Reached target initrd-fs.target. Jul 2 07:48:27.729152 systemd[1]: Reached target initrd.target. Jul 2 07:48:27.730131 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Jul 2 07:48:27.731136 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 07:48:27.741226 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 07:48:27.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.744176 systemd[1]: Starting initrd-cleanup.service... Jul 2 07:48:27.753857 systemd[1]: Stopped target network.target. Jul 2 07:48:27.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.754082 systemd[1]: Stopped target nss-lookup.target. Jul 2 07:48:27.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 2 07:48:27.787132 ignition[869]: INFO : Ignition 2.14.0 Jul 2 07:48:27.787132 ignition[869]: INFO : Stage: umount Jul 2 07:48:27.787132 ignition[869]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:48:27.787132 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:48:27.787132 ignition[869]: INFO : umount: umount passed Jul 2 07:48:27.787132 ignition[869]: INFO : Ignition finished successfully Jul 2 07:48:27.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.754475 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 07:48:27.797000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:48:27.754852 systemd[1]: Stopped target timers.target. Jul 2 07:48:27.755377 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:48:27.755548 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 07:48:27.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.756083 systemd[1]: Stopped target initrd.target. Jul 2 07:48:27.756494 systemd[1]: Stopped target basic.target. Jul 2 07:48:27.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.756854 systemd[1]: Stopped target ignition-complete.target. Jul 2 07:48:27.757402 systemd[1]: Stopped target ignition-diskful.target. Jul 2 07:48:27.757770 systemd[1]: Stopped target initrd-root-device.target. Jul 2 07:48:27.758325 systemd[1]: Stopped target remote-fs.target. Jul 2 07:48:27.758688 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 07:48:27.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.759080 systemd[1]: Stopped target sysinit.target. Jul 2 07:48:27.759427 systemd[1]: Stopped target local-fs.target. Jul 2 07:48:27.759789 systemd[1]: Stopped target local-fs-pre.target. Jul 2 07:48:27.760335 systemd[1]: Stopped target swap.target. Jul 2 07:48:27.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.760660 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:48:27.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:48:27.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.760797 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 07:48:27.761531 systemd[1]: Stopped target cryptsetup.target. Jul 2 07:48:27.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.761943 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:48:27.762078 systemd[1]: Stopped dracut-initqueue.service. Jul 2 07:48:27.762584 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 07:48:27.762731 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 07:48:27.763206 systemd[1]: Stopped target paths.target. Jul 2 07:48:27.763602 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:48:27.767052 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 07:48:27.767564 systemd[1]: Stopped target slices.target. Jul 2 07:48:27.767914 systemd[1]: Stopped target sockets.target. Jul 2 07:48:27.768453 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:48:27.768561 systemd[1]: Closed iscsid.socket. Jul 2 07:48:27.769047 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:48:27.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.769160 systemd[1]: Closed iscsiuio.socket. Jul 2 07:48:27.769599 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 07:48:27.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:27.769752 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 07:48:27.770173 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:48:27.770310 systemd[1]: Stopped ignition-files.service. Jul 2 07:48:27.771851 systemd[1]: Stopping ignition-mount.service... Jul 2 07:48:27.773259 systemd[1]: Stopping sysroot-boot.service... 
Jul 2 07:48:27.774073 systemd[1]: Stopping systemd-networkd.service... Jul 2 07:48:27.774514 systemd[1]: Stopping systemd-resolved.service... Jul 2 07:48:27.774857 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 07:48:27.775085 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 07:48:27.775620 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 07:48:27.775772 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 07:48:27.780202 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:48:27.780295 systemd[1]: Finished initrd-cleanup.service. Jul 2 07:48:27.780760 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 07:48:27.780835 systemd[1]: Stopped ignition-mount.service. Jul 2 07:48:27.782125 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:48:27.782164 systemd[1]: Stopped ignition-disks.service. Jul 2 07:48:27.783880 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:48:27.783916 systemd[1]: Stopped ignition-kargs.service. Jul 2 07:48:27.784923 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:48:27.784956 systemd[1]: Stopped ignition-setup.service. Jul 2 07:48:27.786747 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:48:27.789480 systemd-networkd[712]: eth0: DHCPv6 lease lost Jul 2 07:48:27.869000 audit: BPF prog-id=9 op=UNLOAD Jul 2 07:48:27.790666 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:48:27.790751 systemd[1]: Stopped systemd-resolved.service. Jul 2 07:48:27.792843 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:48:27.792920 systemd[1]: Stopped systemd-networkd.service. Jul 2 07:48:27.794770 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:48:27.794816 systemd[1]: Closed systemd-networkd.socket. Jul 2 07:48:27.798822 systemd[1]: Stopping network-cleanup.service... Jul 2 07:48:27.799807 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:48:27.799868 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 07:48:27.801873 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:48:27.801926 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:48:27.803924 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:48:27.803972 systemd[1]: Stopped systemd-modules-load.service. Jul 2 07:48:27.886289 systemd-journald[197]: Received SIGTERM from PID 1 (n/a). Jul 2 07:48:27.886340 iscsid[732]: iscsid shutting down. Jul 2 07:48:27.805874 systemd[1]: Stopping systemd-udevd.service... Jul 2 07:48:27.808824 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:48:27.812382 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:48:27.812498 systemd[1]: Stopped network-cleanup.service. Jul 2 07:48:27.815834 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 07:48:27.816008 systemd[1]: Stopped systemd-udevd.service. Jul 2 07:48:27.818143 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:48:27.818183 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 07:48:27.819318 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:48:27.819381 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 07:48:27.819466 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:48:27.819501 systemd[1]: Stopped dracut-pre-udev.service. 
Jul 2 07:48:27.819683 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 07:48:27.819712 systemd[1]: Stopped dracut-cmdline.service. Jul 2 07:48:27.819857 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:48:27.819885 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 07:48:27.820909 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 07:48:27.821311 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 07:48:27.821358 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 2 07:48:27.824352 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 07:48:27.824385 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 07:48:27.826311 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:48:27.826344 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 07:48:27.828064 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 2 07:48:27.828414 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:48:27.828487 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 07:48:27.840445 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:48:27.840569 systemd[1]: Stopped sysroot-boot.service. Jul 2 07:48:27.842184 systemd[1]: Reached target initrd-switch-root.target. Jul 2 07:48:27.843898 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:48:27.843951 systemd[1]: Stopped initrd-setup-root.service. Jul 2 07:48:27.846591 systemd[1]: Starting initrd-switch-root.service... Jul 2 07:48:27.860035 systemd[1]: Switching root. Jul 2 07:48:27.893498 systemd-journald[197]: Journal stopped Jul 2 07:48:30.465834 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 07:48:30.465882 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 07:48:30.465893 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:48:30.465903 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:48:30.465920 kernel: SELinux: policy capability open_perms=1 Jul 2 07:48:30.465929 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:48:30.465939 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:48:30.465950 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:48:30.465959 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:48:30.465968 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:48:30.465978 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:48:30.466001 systemd[1]: Successfully loaded SELinux policy in 39.075ms. Jul 2 07:48:30.466017 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.802ms. Jul 2 07:48:30.466029 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:48:30.466039 systemd[1]: Detected virtualization kvm. Jul 2 07:48:30.466049 systemd[1]: Detected architecture x86-64. Jul 2 07:48:30.466063 systemd[1]: Detected first boot. Jul 2 07:48:30.466073 systemd[1]: Initializing machine ID from VM UUID. 
Jul 2 07:48:30.466083 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 07:48:30.466095 systemd[1]: Populated /etc with preset unit settings. Jul 2 07:48:30.466105 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:48:30.466117 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:48:30.466130 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:48:30.466142 kernel: kauditd_printk_skb: 80 callbacks suppressed Jul 2 07:48:30.466151 kernel: audit: type=1334 audit(1719906510.313:84): prog-id=12 op=LOAD Jul 2 07:48:30.466161 kernel: audit: type=1334 audit(1719906510.313:85): prog-id=3 op=UNLOAD Jul 2 07:48:30.466170 kernel: audit: type=1334 audit(1719906510.315:86): prog-id=13 op=LOAD Jul 2 07:48:30.466180 kernel: audit: type=1334 audit(1719906510.316:87): prog-id=14 op=LOAD Jul 2 07:48:30.466189 kernel: audit: type=1334 audit(1719906510.316:88): prog-id=4 op=UNLOAD Jul 2 07:48:30.466199 kernel: audit: type=1334 audit(1719906510.316:89): prog-id=5 op=UNLOAD Jul 2 07:48:30.466209 kernel: audit: type=1131 audit(1719906510.318:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.466220 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 07:48:30.466231 systemd[1]: Stopped iscsiuio.service. Jul 2 07:48:30.466241 kernel: audit: type=1131 audit(1719906510.325:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.466251 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 07:48:30.466261 systemd[1]: Stopped iscsid.service. Jul 2 07:48:30.466271 kernel: audit: type=1131 audit(1719906510.332:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.466281 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 07:48:30.466293 systemd[1]: Stopped initrd-switch-root.service. Jul 2 07:48:30.466303 kernel: audit: type=1130 audit(1719906510.338:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.466315 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 07:48:30.466326 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 07:48:30.466336 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 07:48:30.466346 systemd[1]: Created slice system-getty.slice. Jul 2 07:48:30.466356 systemd[1]: Created slice system-modprobe.slice. Jul 2 07:48:30.466375 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 07:48:30.466386 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 07:48:30.466399 systemd[1]: Created slice system-systemd\x2dfsck.slice. 
Jul 2 07:48:30.466409 systemd[1]: Created slice user.slice. Jul 2 07:48:30.466423 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:48:30.466433 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 07:48:30.466442 systemd[1]: Set up automount boot.automount. Jul 2 07:48:30.466453 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 07:48:30.466463 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 07:48:30.466475 systemd[1]: Stopped target initrd-fs.target. Jul 2 07:48:30.466485 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 07:48:30.466498 systemd[1]: Reached target integritysetup.target. Jul 2 07:48:30.466511 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:48:30.466522 systemd[1]: Reached target remote-fs.target. Jul 2 07:48:30.466532 systemd[1]: Reached target slices.target. Jul 2 07:48:30.466544 systemd[1]: Reached target swap.target. Jul 2 07:48:30.466555 systemd[1]: Reached target torcx.target. Jul 2 07:48:30.466565 systemd[1]: Reached target veritysetup.target. Jul 2 07:48:30.466575 systemd[1]: Listening on systemd-coredump.socket. Jul 2 07:48:30.466585 systemd[1]: Listening on systemd-initctl.socket. Jul 2 07:48:30.466596 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:48:30.466606 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:48:30.466616 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:48:30.466626 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 07:48:30.466636 systemd[1]: Mounting dev-hugepages.mount... Jul 2 07:48:30.466648 systemd[1]: Mounting dev-mqueue.mount... Jul 2 07:48:30.466658 systemd[1]: Mounting media.mount... Jul 2 07:48:30.466668 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:48:30.466678 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 07:48:30.466688 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 07:48:30.466699 systemd[1]: Mounting tmp.mount... Jul 2 07:48:30.466709 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 07:48:30.466719 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:48:30.466729 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:48:30.466740 systemd[1]: Starting modprobe@configfs.service... Jul 2 07:48:30.466750 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:48:30.466760 systemd[1]: Starting modprobe@drm.service... Jul 2 07:48:30.466772 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:48:30.466782 systemd[1]: Starting modprobe@fuse.service... Jul 2 07:48:30.466793 systemd[1]: Starting modprobe@loop.service... Jul 2 07:48:30.466803 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 07:48:30.466814 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 07:48:30.466826 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 07:48:30.466837 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 07:48:30.466848 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 07:48:30.466858 systemd[1]: Stopped systemd-journald.service. Jul 2 07:48:30.466867 kernel: loop: module loaded Jul 2 07:48:30.466877 kernel: fuse: init (API version 7.34) Jul 2 07:48:30.466887 systemd[1]: Starting systemd-journald.service... Jul 2 07:48:30.466897 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:48:30.466909 systemd[1]: Starting systemd-network-generator.service... 
Jul 2 07:48:30.466919 systemd[1]: Starting systemd-remount-fs.service... Jul 2 07:48:30.466930 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:48:30.466940 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 07:48:30.466952 systemd-journald[988]: Journal started Jul 2 07:48:30.467025 systemd-journald[988]: Runtime Journal (/run/log/journal/8d8e79fe77734dd899fb541ccfcba420) is 6.0M, max 48.4M, 42.4M free. Jul 2 07:48:27.940000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:48:28.133000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:48:28.133000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:48:28.133000 audit: BPF prog-id=10 op=LOAD Jul 2 07:48:28.133000 audit: BPF prog-id=10 op=UNLOAD Jul 2 07:48:28.133000 audit: BPF prog-id=11 op=LOAD Jul 2 07:48:28.133000 audit: BPF prog-id=11 op=UNLOAD Jul 2 07:48:28.169000 audit[903]: AVC avc: denied { associate } for pid=903 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 07:48:28.169000 audit[903]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=886 pid=903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:48:28.169000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:48:28.172000 audit[903]: AVC avc: denied { associate } for pid=903 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 07:48:28.172000 audit[903]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059b9 a2=1ed a3=0 items=2 ppid=886 pid=903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:48:28.172000 audit: CWD cwd="/" Jul 2 07:48:28.172000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:28.172000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:28.172000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:48:30.313000 audit: 
BPF prog-id=12 op=LOAD Jul 2 07:48:30.313000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:48:30.315000 audit: BPF prog-id=13 op=LOAD Jul 2 07:48:30.316000 audit: BPF prog-id=14 op=LOAD Jul 2 07:48:30.316000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:48:30.316000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:48:30.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.341000 audit: BPF prog-id=12 op=UNLOAD Jul 2 07:48:30.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:48:30.449000 audit: BPF prog-id=15 op=LOAD Jul 2 07:48:30.449000 audit: BPF prog-id=16 op=LOAD Jul 2 07:48:30.450000 audit: BPF prog-id=17 op=LOAD Jul 2 07:48:30.450000 audit: BPF prog-id=13 op=UNLOAD Jul 2 07:48:30.450000 audit: BPF prog-id=14 op=UNLOAD Jul 2 07:48:30.463000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:48:30.463000 audit[988]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff447310e0 a2=4000 a3=7fff4473117c items=0 ppid=1 pid=988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:48:30.463000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:48:28.168575 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:48:30.312154 systemd[1]: Queued start job for default target multi-user.target. Jul 2 07:48:28.169011 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:48:30.312165 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 2 07:48:28.169034 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:48:30.319150 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 07:48:28.169063 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 07:48:28.169072 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 07:48:28.169110 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 07:48:28.169125 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 07:48:28.169371 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 07:48:30.468743 systemd[1]: Stopped verity-setup.service. 
Jul 2 07:48:28.169418 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:48:28.169445 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:48:28.170128 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 07:48:28.170164 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 07:48:28.170180 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 07:48:28.170193 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 07:48:28.170209 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 07:48:28.170221 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:28Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 07:48:30.043853 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:30Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:48:30.044128 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:30Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:48:30.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:48:30.044232 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:30Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:48:30.044398 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:30Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:48:30.044442 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:30Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 07:48:30.044497 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2024-07-02T07:48:30Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 07:48:30.472006 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:48:30.473997 systemd[1]: Started systemd-journald.service. Jul 2 07:48:30.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.474786 systemd[1]: Mounted dev-hugepages.mount. Jul 2 07:48:30.475601 systemd[1]: Mounted dev-mqueue.mount. Jul 2 07:48:30.476390 systemd[1]: Mounted media.mount. Jul 2 07:48:30.477112 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 07:48:30.477953 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 07:48:30.478811 systemd[1]: Mounted tmp.mount. Jul 2 07:48:30.479675 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 07:48:30.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.480691 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:48:30.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.481691 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:48:30.481828 systemd[1]: Finished modprobe@configfs.service. Jul 2 07:48:30.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.482840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:48:30.482962 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 2 07:48:30.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.483953 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:48:30.484089 systemd[1]: Finished modprobe@drm.service. Jul 2 07:48:30.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.485141 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:48:30.485258 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:48:30.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.486285 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 07:48:30.486414 systemd[1]: Finished modprobe@fuse.service. Jul 2 07:48:30.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.487386 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:48:30.487509 systemd[1]: Finished modprobe@loop.service. Jul 2 07:48:30.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.488518 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:48:30.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.489572 systemd[1]: Finished systemd-network-generator.service. 
Jul 2 07:48:30.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.490692 systemd[1]: Finished systemd-remount-fs.service. Jul 2 07:48:30.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.491868 systemd[1]: Reached target network-pre.target. Jul 2 07:48:30.493644 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 07:48:30.495415 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 07:48:30.496283 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:48:30.497853 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 07:48:30.500023 systemd[1]: Starting systemd-journal-flush.service... Jul 2 07:48:30.501176 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:48:30.502269 systemd[1]: Starting systemd-random-seed.service... Jul 2 07:48:30.503391 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:48:30.504575 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:48:30.506730 systemd[1]: Starting systemd-sysusers.service... Jul 2 07:48:30.508246 systemd-journald[988]: Time spent on flushing to /var/log/journal/8d8e79fe77734dd899fb541ccfcba420 is 27.656ms for 1139 entries. Jul 2 07:48:30.508246 systemd-journald[988]: System Journal (/var/log/journal/8d8e79fe77734dd899fb541ccfcba420) is 8.0M, max 195.6M, 187.6M free. Jul 2 07:48:30.547971 systemd-journald[988]: Received client request to flush runtime journal. Jul 2 07:48:30.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.511287 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 07:48:30.512538 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 07:48:30.514141 systemd[1]: Finished systemd-random-seed.service. Jul 2 07:48:30.515471 systemd[1]: Reached target first-boot-complete.target. Jul 2 07:48:30.531063 systemd[1]: Finished systemd-sysusers.service. Jul 2 07:48:30.533660 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:48:30.537393 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:48:30.549376 systemd[1]: Finished systemd-journal-flush.service. Jul 2 07:48:30.551126 systemd[1]: Finished systemd-udev-trigger.service. 
Jul 2 07:48:30.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.553884 systemd[1]: Starting systemd-udev-settle.service... Jul 2 07:48:30.555888 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:48:30.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.560942 udevadm[1011]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 07:48:30.929011 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 07:48:30.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.929000 audit: BPF prog-id=18 op=LOAD Jul 2 07:48:30.929000 audit: BPF prog-id=19 op=LOAD Jul 2 07:48:30.929000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:48:30.929000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:48:30.931306 systemd[1]: Starting systemd-udevd.service... Jul 2 07:48:30.947756 systemd-udevd[1012]: Using default interface naming scheme 'v252'. Jul 2 07:48:30.961017 systemd[1]: Started systemd-udevd.service. Jul 2 07:48:30.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:30.964000 audit: BPF prog-id=20 op=LOAD Jul 2 07:48:30.966008 systemd[1]: Starting systemd-networkd.service... Jul 2 07:48:30.975000 audit: BPF prog-id=21 op=LOAD Jul 2 07:48:30.975000 audit: BPF prog-id=22 op=LOAD Jul 2 07:48:30.975000 audit: BPF prog-id=23 op=LOAD Jul 2 07:48:30.977240 systemd[1]: Starting systemd-userdbd.service... Jul 2 07:48:30.986474 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 07:48:31.009010 systemd[1]: Started systemd-userdbd.service. Jul 2 07:48:31.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.033013 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 07:48:31.042013 kernel: ACPI: button: Power Button [PWRF] Jul 2 07:48:31.043942 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:48:31.053072 systemd-networkd[1031]: lo: Link UP Jul 2 07:48:31.053082 systemd-networkd[1031]: lo: Gained carrier Jul 2 07:48:31.053604 systemd-networkd[1031]: Enumeration completed Jul 2 07:48:31.053700 systemd-networkd[1031]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:48:31.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.053709 systemd[1]: Started systemd-networkd.service. 
Jul 2 07:48:31.056218 systemd-networkd[1031]: eth0: Link UP Jul 2 07:48:31.056224 systemd-networkd[1031]: eth0: Gained carrier Jul 2 07:48:31.044000 audit[1017]: AVC avc: denied { confidentiality } for pid=1017 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:48:31.044000 audit[1017]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=558648cace90 a1=3207c a2=7f3f791a6bc5 a3=5 items=108 ppid=1012 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:48:31.044000 audit: CWD cwd="/" Jul 2 07:48:31.044000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=1 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=2 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=3 name=(null) inode=14208 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=4 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=5 name=(null) inode=14209 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=6 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=7 name=(null) inode=14210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=8 name=(null) inode=14210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=9 name=(null) inode=14211 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=10 name=(null) inode=14210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=11 name=(null) inode=14212 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=12 name=(null) inode=14210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=13 name=(null) inode=14213 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=14 name=(null) inode=14210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=15 name=(null) inode=14214 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=16 name=(null) inode=14210 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=17 name=(null) inode=14215 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=18 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=19 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=20 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=21 name=(null) inode=14217 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=22 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=23 name=(null) inode=14218 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=24 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=25 name=(null) inode=14219 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=26 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=27 name=(null) inode=14220 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=28 name=(null) inode=14216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=29 name=(null) inode=14221 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=30 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=31 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=32 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=33 name=(null) inode=14223 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=34 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=35 name=(null) inode=14224 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=36 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=37 name=(null) inode=14225 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=38 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=39 name=(null) inode=14226 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=40 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=41 name=(null) inode=14227 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=42 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=43 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=44 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=45 name=(null) inode=14229 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=46 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=47 name=(null) inode=14230 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=48 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=49 name=(null) inode=14231 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=50 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=51 name=(null) inode=14232 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=52 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=53 name=(null) inode=14233 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=55 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=56 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=57 name=(null) inode=14235 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=58 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=59 name=(null) inode=14236 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=60 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=61 name=(null) inode=14237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=62 
name=(null) inode=14237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=63 name=(null) inode=14238 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=64 name=(null) inode=14237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=65 name=(null) inode=14239 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=66 name=(null) inode=14237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=67 name=(null) inode=14240 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=68 name=(null) inode=14237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=69 name=(null) inode=14241 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=70 name=(null) inode=14237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=71 name=(null) inode=14242 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=72 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=73 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=74 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=75 name=(null) inode=14244 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=76 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=77 name=(null) inode=14245 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=78 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=79 name=(null) inode=14246 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=80 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=81 name=(null) inode=14247 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=82 name=(null) inode=14243 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=83 name=(null) inode=14248 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=84 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=85 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=86 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=87 name=(null) inode=14250 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=88 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=89 name=(null) inode=14251 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=90 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=91 name=(null) inode=14252 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=92 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=93 name=(null) inode=14253 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=94 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=95 name=(null) inode=14254 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=96 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=97 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=98 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=99 name=(null) inode=14256 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=100 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=101 name=(null) inode=14257 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=102 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=103 name=(null) inode=14258 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=104 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=105 name=(null) inode=14259 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=106 name=(null) inode=14255 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PATH item=107 name=(null) inode=14260 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:48:31.044000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 07:48:31.064017 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Jul 2 07:48:31.069143 systemd-networkd[1031]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 07:48:31.076024 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 07:48:31.093013 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 07:48:31.129440 kernel: kvm: Nested Virtualization enabled Jul 2 07:48:31.129512 kernel: SVM: kvm: Nested Paging enabled Jul 2 07:48:31.129527 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 2 07:48:31.129541 kernel: SVM: Virtual 
GIF supported Jul 2 07:48:31.144011 kernel: EDAC MC: Ver: 3.0.0 Jul 2 07:48:31.161316 systemd[1]: Finished systemd-udev-settle.service. Jul 2 07:48:31.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.163215 systemd[1]: Starting lvm2-activation-early.service... Jul 2 07:48:31.170269 lvm[1048]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:48:31.198653 systemd[1]: Finished lvm2-activation-early.service. Jul 2 07:48:31.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.199638 systemd[1]: Reached target cryptsetup.target. Jul 2 07:48:31.201200 systemd[1]: Starting lvm2-activation.service... Jul 2 07:48:31.205255 lvm[1049]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:48:31.230123 systemd[1]: Finished lvm2-activation.service. Jul 2 07:48:31.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.231039 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:48:31.231877 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 07:48:31.231897 systemd[1]: Reached target local-fs.target. Jul 2 07:48:31.232685 systemd[1]: Reached target machines.target. Jul 2 07:48:31.234346 systemd[1]: Starting ldconfig.service... Jul 2 07:48:31.235234 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:48:31.235295 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:48:31.236180 systemd[1]: Starting systemd-boot-update.service... Jul 2 07:48:31.237746 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 07:48:31.239862 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 07:48:31.242100 systemd[1]: Starting systemd-sysext.service... Jul 2 07:48:31.243253 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1051 (bootctl) Jul 2 07:48:31.244191 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 07:48:31.250277 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 07:48:31.253278 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 07:48:31.253427 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 07:48:31.264007 kernel: loop0: detected capacity change from 0 to 209816 Jul 2 07:48:31.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:48:31.278674 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 07:48:31.284397 systemd-fsck[1061]: fsck.fat 4.2 (2021-01-31) Jul 2 07:48:31.284397 systemd-fsck[1061]: /dev/vda1: 790 files, 119261/258078 clusters Jul 2 07:48:31.281402 systemd[1]: Mounting boot.mount... Jul 2 07:48:31.282442 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 07:48:31.297328 systemd[1]: Mounted boot.mount. Jul 2 07:48:31.470036 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 07:48:31.469974 systemd[1]: Finished systemd-boot-update.service. Jul 2 07:48:31.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.475745 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 07:48:31.476410 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 07:48:31.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.486016 kernel: loop1: detected capacity change from 0 to 209816 Jul 2 07:48:31.490860 (sd-sysext)[1068]: Using extensions 'kubernetes'. Jul 2 07:48:31.491207 (sd-sysext)[1068]: Merged extensions into '/usr'. Jul 2 07:48:31.507891 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:48:31.509406 systemd[1]: Mounting usr-share-oem.mount... Jul 2 07:48:31.510632 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:48:31.512173 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:48:31.514397 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:48:31.516833 systemd[1]: Starting modprobe@loop.service... Jul 2 07:48:31.517809 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:48:31.518021 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:48:31.518188 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:48:31.521007 systemd[1]: Mounted usr-share-oem.mount. Jul 2 07:48:31.522298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:48:31.522451 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:48:31.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.523632 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:48:31.523762 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 2 07:48:31.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.525108 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:48:31.525244 systemd[1]: Finished modprobe@loop.service. Jul 2 07:48:31.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.526590 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:48:31.526687 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:48:31.527760 systemd[1]: Finished systemd-sysext.service. Jul 2 07:48:31.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.530184 systemd[1]: Starting ensure-sysext.service... Jul 2 07:48:31.531151 ldconfig[1050]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 07:48:31.532157 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 07:48:31.538351 systemd[1]: Finished ldconfig.service. Jul 2 07:48:31.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.539346 systemd[1]: Reloading. Jul 2 07:48:31.542897 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 07:48:31.543568 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 07:48:31.544918 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 07:48:31.594554 /usr/lib/systemd/system-generators/torcx-generator[1095]: time="2024-07-02T07:48:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:48:31.594586 /usr/lib/systemd/system-generators/torcx-generator[1095]: time="2024-07-02T07:48:31Z" level=info msg="torcx already run" Jul 2 07:48:31.665239 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:48:31.665258 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Jul 2 07:48:31.682382 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:48:31.733000 audit: BPF prog-id=24 op=LOAD Jul 2 07:48:31.733000 audit: BPF prog-id=21 op=UNLOAD Jul 2 07:48:31.733000 audit: BPF prog-id=25 op=LOAD Jul 2 07:48:31.733000 audit: BPF prog-id=26 op=LOAD Jul 2 07:48:31.733000 audit: BPF prog-id=22 op=UNLOAD Jul 2 07:48:31.733000 audit: BPF prog-id=23 op=UNLOAD Jul 2 07:48:31.734000 audit: BPF prog-id=27 op=LOAD Jul 2 07:48:31.734000 audit: BPF prog-id=28 op=LOAD Jul 2 07:48:31.734000 audit: BPF prog-id=18 op=UNLOAD Jul 2 07:48:31.734000 audit: BPF prog-id=19 op=UNLOAD Jul 2 07:48:31.736000 audit: BPF prog-id=29 op=LOAD Jul 2 07:48:31.736000 audit: BPF prog-id=15 op=UNLOAD Jul 2 07:48:31.736000 audit: BPF prog-id=30 op=LOAD Jul 2 07:48:31.736000 audit: BPF prog-id=31 op=LOAD Jul 2 07:48:31.736000 audit: BPF prog-id=16 op=UNLOAD Jul 2 07:48:31.736000 audit: BPF prog-id=17 op=UNLOAD Jul 2 07:48:31.737000 audit: BPF prog-id=32 op=LOAD Jul 2 07:48:31.737000 audit: BPF prog-id=20 op=UNLOAD Jul 2 07:48:31.740964 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 07:48:31.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.746155 systemd[1]: Starting audit-rules.service... Jul 2 07:48:31.748589 systemd[1]: Starting clean-ca-certificates.service... Jul 2 07:48:31.752000 audit: BPF prog-id=33 op=LOAD Jul 2 07:48:31.751111 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 07:48:31.753750 systemd[1]: Starting systemd-resolved.service... Jul 2 07:48:31.754000 audit: BPF prog-id=34 op=LOAD Jul 2 07:48:31.756102 systemd[1]: Starting systemd-timesyncd.service... Jul 2 07:48:31.758047 systemd[1]: Starting systemd-update-utmp.service... Jul 2 07:48:31.759574 systemd[1]: Finished clean-ca-certificates.service. Jul 2 07:48:31.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.760000 audit[1148]: SYSTEM_BOOT pid=1148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.767030 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 07:48:31.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:48:31.768665 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:48:31.768893 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:48:31.771151 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:48:31.773436 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:48:31.775443 systemd[1]: Starting modprobe@loop.service... 
Jul 2 07:48:31.776334 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:48:31.776444 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:48:31.778033 systemd[1]: Starting systemd-update-done.service... Jul 2 07:48:31.779000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 07:48:31.779000 audit[1161]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcca791740 a2=420 a3=0 items=0 ppid=1137 pid=1161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:48:31.779000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 07:48:31.779030 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:48:31.780937 augenrules[1161]: No rules Jul 2 07:48:31.779133 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:48:31.780379 systemd[1]: Finished systemd-update-utmp.service. Jul 2 07:48:31.781774 systemd[1]: Finished audit-rules.service. Jul 2 07:48:31.782977 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:48:31.783130 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:48:31.784486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:48:31.784593 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:48:31.785824 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:48:31.785925 systemd[1]: Finished modprobe@loop.service. Jul 2 07:48:31.787113 systemd[1]: Finished systemd-update-done.service. Jul 2 07:48:31.790374 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:48:31.790549 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:48:31.791826 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:48:31.793874 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:48:31.795786 systemd[1]: Starting modprobe@loop.service... Jul 2 07:48:31.796598 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:48:31.796701 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:48:31.796780 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:48:31.796862 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:48:31.797870 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:48:31.798017 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:48:31.799354 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:48:31.799505 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 2 07:48:31.800851 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:48:31.801004 systemd[1]: Finished modprobe@loop.service. Jul 2 07:48:31.805159 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:48:31.805367 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:48:31.806791 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:48:31.808902 systemd[1]: Starting modprobe@drm.service... Jul 2 07:48:31.810897 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:48:31.813250 systemd[1]: Starting modprobe@loop.service... Jul 2 07:48:31.814148 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:48:31.814296 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:48:31.815554 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 07:48:31.817678 systemd-resolved[1144]: Positive Trust Anchors: Jul 2 07:48:31.817888 systemd-resolved[1144]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:48:31.818001 systemd-resolved[1144]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:48:31.819110 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:48:31.819212 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:48:31.820338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:48:31.820460 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:48:31.821748 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:48:31.821855 systemd[1]: Finished modprobe@drm.service. Jul 2 07:48:31.823246 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:48:31.823373 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:48:31.824802 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:48:31.824908 systemd[1]: Finished modprobe@loop.service. Jul 2 07:48:31.826537 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:48:31.826669 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:48:31.827687 systemd[1]: Finished ensure-sysext.service. Jul 2 07:48:31.828305 systemd-resolved[1144]: Defaulting to hostname 'linux'. Jul 2 07:48:31.830183 systemd[1]: Started systemd-resolved.service. Jul 2 07:48:31.831221 systemd[1]: Reached target network.target. Jul 2 07:48:31.832157 systemd[1]: Reached target nss-lookup.target. Jul 2 07:48:31.844664 systemd[1]: Started systemd-timesyncd.service. Jul 2 07:48:31.845726 systemd[1]: Reached target sysinit.target. Jul 2 07:48:31.846694 systemd[1]: Started motdgen.path. 
Jul 2 07:48:31.847760 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 07:48:31.849030 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 07:48:31.850015 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:48:31.850025 systemd-timesyncd[1147]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 07:48:31.850041 systemd[1]: Reached target paths.target. Jul 2 07:48:31.850866 systemd[1]: Reached target time-set.target. Jul 2 07:48:31.850875 systemd-timesyncd[1147]: Initial clock synchronization to Tue 2024-07-02 07:48:32.212538 UTC. Jul 2 07:48:31.851885 systemd[1]: Started logrotate.timer. Jul 2 07:48:31.852734 systemd[1]: Started mdadm.timer. Jul 2 07:48:31.853414 systemd[1]: Reached target timers.target. Jul 2 07:48:31.854483 systemd[1]: Listening on dbus.socket. Jul 2 07:48:31.856354 systemd[1]: Starting docker.socket... Jul 2 07:48:31.859916 systemd[1]: Listening on sshd.socket. Jul 2 07:48:31.860835 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:48:31.861241 systemd[1]: Listening on docker.socket. Jul 2 07:48:31.862118 systemd[1]: Reached target sockets.target. Jul 2 07:48:31.862970 systemd[1]: Reached target basic.target. Jul 2 07:48:31.863818 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:48:31.863843 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:48:31.864776 systemd[1]: Starting containerd.service... Jul 2 07:48:31.866793 systemd[1]: Starting dbus.service... Jul 2 07:48:31.869012 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 07:48:31.870910 systemd[1]: Starting extend-filesystems.service... Jul 2 07:48:31.871910 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 07:48:31.873042 systemd[1]: Starting motdgen.service... Jul 2 07:48:31.874864 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 07:48:31.874960 jq[1179]: false Jul 2 07:48:31.876783 systemd[1]: Starting sshd-keygen.service... Jul 2 07:48:31.879641 systemd[1]: Starting systemd-logind.service... Jul 2 07:48:31.880505 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:48:31.880573 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 07:48:31.880978 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 07:48:31.881676 systemd[1]: Starting update-engine.service... Jul 2 07:48:31.883337 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 07:48:31.885632 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 07:48:31.885788 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 07:48:31.886135 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 07:48:31.886265 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Jul 2 07:48:31.887087 jq[1187]: true Jul 2 07:48:31.899403 jq[1192]: true Jul 2 07:48:31.898252 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 07:48:31.898540 systemd[1]: Finished motdgen.service. Jul 2 07:48:31.903611 dbus-daemon[1178]: [system] SELinux support is enabled Jul 2 07:48:31.903750 systemd[1]: Started dbus.service. Jul 2 07:48:31.906281 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 07:48:31.906312 systemd[1]: Reached target system-config.target. Jul 2 07:48:31.907346 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 07:48:31.907367 systemd[1]: Reached target user-config.target. Jul 2 07:48:31.911469 extend-filesystems[1180]: Found loop1 Jul 2 07:48:31.912571 extend-filesystems[1180]: Found sr0 Jul 2 07:48:31.912571 extend-filesystems[1180]: Found vda Jul 2 07:48:31.912571 extend-filesystems[1180]: Found vda1 Jul 2 07:48:31.912571 extend-filesystems[1180]: Found vda2 Jul 2 07:48:31.912571 extend-filesystems[1180]: Found vda3 Jul 2 07:48:31.912571 extend-filesystems[1180]: Found usr Jul 2 07:48:31.912571 extend-filesystems[1180]: Found vda4 Jul 2 07:48:31.912571 extend-filesystems[1180]: Found vda6 Jul 2 07:48:31.912571 extend-filesystems[1180]: Found vda7 Jul 2 07:48:31.912571 extend-filesystems[1180]: Found vda9 Jul 2 07:48:31.912571 extend-filesystems[1180]: Checking size of /dev/vda9 Jul 2 07:48:31.935925 extend-filesystems[1180]: Resized partition /dev/vda9 Jul 2 07:48:31.937171 env[1193]: time="2024-07-02T07:48:31.925755049Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 07:48:31.952236 env[1193]: time="2024-07-02T07:48:31.952190945Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 07:48:31.952486 env[1193]: time="2024-07-02T07:48:31.952467163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:48:31.953616 env[1193]: time="2024-07-02T07:48:31.953583016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:48:31.953706 env[1193]: time="2024-07-02T07:48:31.953686470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:48:31.954031 env[1193]: time="2024-07-02T07:48:31.954009676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:48:31.954121 env[1193]: time="2024-07-02T07:48:31.954099855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jul 2 07:48:31.954200 env[1193]: time="2024-07-02T07:48:31.954179865Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 07:48:31.954854 update_engine[1186]: I0702 07:48:31.925172 1186 main.cc:92] Flatcar Update Engine starting Jul 2 07:48:31.957097 extend-filesystems[1226]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 07:48:31.957798 systemd[1]: Started update-engine.service. Jul 2 07:48:31.959651 update_engine[1186]: I0702 07:48:31.957861 1186 update_check_scheduler.cc:74] Next update check in 9m37s Jul 2 07:48:31.964148 env[1193]: time="2024-07-02T07:48:31.963205562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 07:48:31.964148 env[1193]: time="2024-07-02T07:48:31.963373547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:48:31.964148 env[1193]: time="2024-07-02T07:48:31.963660274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:48:31.964148 env[1193]: time="2024-07-02T07:48:31.963852595Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:48:31.964148 env[1193]: time="2024-07-02T07:48:31.963872212Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 07:48:31.964148 env[1193]: time="2024-07-02T07:48:31.963933827Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 07:48:31.964148 env[1193]: time="2024-07-02T07:48:31.963946481Z" level=info msg="metadata content store policy set" policy=shared Jul 2 07:48:31.964765 systemd[1]: Started locksmithd.service. Jul 2 07:48:31.967637 systemd-logind[1184]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 07:48:31.967662 systemd-logind[1184]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 07:48:31.967868 systemd-logind[1184]: New seat seat0. Jul 2 07:48:31.969634 systemd[1]: Started systemd-logind.service. Jul 2 07:48:31.985010 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 07:48:32.093021 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 07:48:32.144754 locksmithd[1231]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 07:48:32.945187 extend-filesystems[1226]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 07:48:32.945187 extend-filesystems[1226]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 07:48:32.945187 extend-filesystems[1226]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 07:48:32.630006 systemd-networkd[1031]: eth0: Gained IPv6LL Jul 2 07:48:32.950996 sshd_keygen[1200]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 07:48:32.951106 extend-filesystems[1180]: Resized filesystem in /dev/vda9 Jul 2 07:48:32.631959 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 07:48:32.954803 bash[1223]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:48:32.954898 env[1193]: time="2024-07-02T07:48:32.951534624Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jul 2 07:48:32.954898 env[1193]: time="2024-07-02T07:48:32.951584292Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 07:48:32.954898 env[1193]: time="2024-07-02T07:48:32.951596567Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 07:48:32.954898 env[1193]: time="2024-07-02T07:48:32.951644413Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 07:48:32.954898 env[1193]: time="2024-07-02T07:48:32.951659767Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 07:48:32.954898 env[1193]: time="2024-07-02T07:48:32.951674201Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 07:48:32.954898 env[1193]: time="2024-07-02T07:48:32.951686539Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 07:48:32.954898 env[1193]: time="2024-07-02T07:48:32.951698815Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 07:48:32.954898 env[1193]: time="2024-07-02T07:48:32.951712064Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 07:48:32.954898 env[1193]: time="2024-07-02T07:48:32.951725671Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 07:48:32.954898 env[1193]: time="2024-07-02T07:48:32.951738270Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 07:48:32.954898 env[1193]: time="2024-07-02T07:48:32.951750546Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 07:48:32.954898 env[1193]: time="2024-07-02T07:48:32.951856501Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 07:48:32.954898 env[1193]: time="2024-07-02T07:48:32.951957817Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 07:48:32.633408 systemd[1]: Reached target network-online.target. Jul 2 07:48:32.955462 env[1193]: time="2024-07-02T07:48:32.952242597Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 07:48:32.955462 env[1193]: time="2024-07-02T07:48:32.952273600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 07:48:32.955462 env[1193]: time="2024-07-02T07:48:32.952288032Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 07:48:32.955462 env[1193]: time="2024-07-02T07:48:32.952334684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 07:48:32.955462 env[1193]: time="2024-07-02T07:48:32.952346426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 07:48:32.955462 env[1193]: time="2024-07-02T07:48:32.952358282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jul 2 07:48:32.955462 env[1193]: time="2024-07-02T07:48:32.952388939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 07:48:32.955462 env[1193]: time="2024-07-02T07:48:32.952405322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 07:48:32.955462 env[1193]: time="2024-07-02T07:48:32.952418551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 07:48:32.955462 env[1193]: time="2024-07-02T07:48:32.952429213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 07:48:32.955462 env[1193]: time="2024-07-02T07:48:32.952439404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 07:48:32.955462 env[1193]: time="2024-07-02T07:48:32.952451606Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 07:48:32.955462 env[1193]: time="2024-07-02T07:48:32.952588837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 07:48:32.955462 env[1193]: time="2024-07-02T07:48:32.952603951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 07:48:32.955462 env[1193]: time="2024-07-02T07:48:32.952620029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 07:48:32.651614 systemd[1]: Starting kubelet.service... Jul 2 07:48:32.955862 env[1193]: time="2024-07-02T07:48:32.952642643Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 07:48:32.955862 env[1193]: time="2024-07-02T07:48:32.952679291Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 07:48:32.955862 env[1193]: time="2024-07-02T07:48:32.952691535Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 07:48:32.955862 env[1193]: time="2024-07-02T07:48:32.952709456Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 07:48:32.955862 env[1193]: time="2024-07-02T07:48:32.952742806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 07:48:32.950582 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 07:48:32.950751 systemd[1]: Finished extend-filesystems.service. 
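Annotation: the extend-filesystems/resize2fs records above grow /dev/vda9 online from 553472 to 1864699 blocks, and the kernel confirms the EXT4 resize; the "(4k)" in the resize output gives the block size. A quick sketch (illustrative only, not part of the boot flow) to turn those block counts into sizes:

```python
# Convert the resize2fs block counts reported above into human-readable sizes.
# Block size is 4 KiB, per the "(4k) blocks" wording in the extend-filesystems output.
BLOCK_SIZE = 4 * 1024

old_blocks = 553_472
new_blocks = 1_864_699

old_bytes = old_blocks * BLOCK_SIZE
new_bytes = new_blocks * BLOCK_SIZE

print(f"before: {old_bytes / 2**30:.2f} GiB")            # ~2.11 GiB
print(f"after:  {new_bytes / 2**30:.2f} GiB")            # ~7.11 GiB
print(f"growth: {(new_bytes - old_bytes) / 2**30:.2f} GiB")  # ~5.00 GiB
```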
Jul 2 07:48:32.956042 env[1193]: time="2024-07-02T07:48:32.952915398Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 07:48:32.956042 env[1193]: time="2024-07-02T07:48:32.952963788Z" level=info msg="Connect containerd service" Jul 2 07:48:32.956042 env[1193]: time="2024-07-02T07:48:32.952993000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 07:48:32.956042 env[1193]: time="2024-07-02T07:48:32.953502552Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:48:32.956042 env[1193]: time="2024-07-02T07:48:32.953697401Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 07:48:32.956042 env[1193]: time="2024-07-02T07:48:32.953740869Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 2 07:48:32.956042 env[1193]: time="2024-07-02T07:48:32.953773768Z" level=info msg="containerd successfully booted in 1.028564s" Jul 2 07:48:32.956042 env[1193]: time="2024-07-02T07:48:32.955632126Z" level=info msg="Start subscribing containerd event" Jul 2 07:48:32.956042 env[1193]: time="2024-07-02T07:48:32.955715647Z" level=info msg="Start recovering state" Jul 2 07:48:32.956042 env[1193]: time="2024-07-02T07:48:32.955772301Z" level=info msg="Start event monitor" Jul 2 07:48:32.956042 env[1193]: time="2024-07-02T07:48:32.955788096Z" level=info msg="Start snapshots syncer" Jul 2 07:48:32.956042 env[1193]: time="2024-07-02T07:48:32.955808080Z" level=info msg="Start cni network conf syncer for default" Jul 2 07:48:32.956042 env[1193]: time="2024-07-02T07:48:32.955818083Z" level=info msg="Start streaming server" Jul 2 07:48:32.952551 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 07:48:32.954269 systemd[1]: Started containerd.service. Jul 2 07:48:32.978879 systemd[1]: Finished sshd-keygen.service. Jul 2 07:48:32.981406 systemd[1]: Starting issuegen.service... Jul 2 07:48:32.986577 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 07:48:32.986716 systemd[1]: Finished issuegen.service. Jul 2 07:48:32.988858 systemd[1]: Starting systemd-user-sessions.service... Jul 2 07:48:32.996591 systemd[1]: Finished systemd-user-sessions.service. Jul 2 07:48:32.999090 systemd[1]: Started getty@tty1.service. Jul 2 07:48:33.001130 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 07:48:33.002280 systemd[1]: Reached target getty.target. Jul 2 07:48:33.804047 systemd[1]: Started kubelet.service. Jul 2 07:48:33.835919 systemd[1]: Reached target multi-user.target. Jul 2 07:48:33.838388 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 07:48:33.846171 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 07:48:33.846318 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 07:48:33.847456 systemd[1]: Startup finished in 674ms (kernel) + 4.151s (initrd) + 5.946s (userspace) = 10.772s. Jul 2 07:48:34.539867 kubelet[1255]: E0702 07:48:34.539772 1255 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:48:34.541951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:48:34.542086 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:48:34.542323 systemd[1]: kubelet.service: Consumed 1.418s CPU time. Jul 2 07:48:37.510440 systemd[1]: Created slice system-sshd.slice. Jul 2 07:48:37.511369 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:44480.service. Jul 2 07:48:37.552762 sshd[1265]: Accepted publickey for core from 10.0.0.1 port 44480 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:48:37.554037 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:48:37.561830 systemd-logind[1184]: New session 1 of user core. Jul 2 07:48:37.562712 systemd[1]: Created slice user-500.slice. Jul 2 07:48:37.563691 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 07:48:37.570771 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 07:48:37.571879 systemd[1]: Starting user@500.service... 
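Annotation: the first kubelet start above (pid 1255) exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; a later start succeeds, so the file is presumably written in between by the node's provisioning tooling, though that write itself is not captured here and is an assumption. A minimal pre-flight check along the lines of what the error implies:

```python
# Minimal sketch: verify the kubelet config file named in the error above exists
# and is non-empty before expecting kubelet.service to stay running.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path taken from the log line

def kubelet_config_ready(path: Path = KUBELET_CONFIG) -> bool:
    """Return True if the kubelet config file exists and has content."""
    return path.is_file() and path.stat().st_size > 0

if __name__ == "__main__":
    if not kubelet_config_ready():
        print(f"{KUBELET_CONFIG} missing or empty; kubelet will exit as in the log above")
```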
Jul 2 07:48:37.574405 (systemd)[1268]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:48:37.639064 systemd[1268]: Queued start job for default target default.target. Jul 2 07:48:37.639479 systemd[1268]: Reached target paths.target. Jul 2 07:48:37.639498 systemd[1268]: Reached target sockets.target. Jul 2 07:48:37.639509 systemd[1268]: Reached target timers.target. Jul 2 07:48:37.639519 systemd[1268]: Reached target basic.target. Jul 2 07:48:37.639554 systemd[1268]: Reached target default.target. Jul 2 07:48:37.639575 systemd[1268]: Startup finished in 60ms. Jul 2 07:48:37.639652 systemd[1]: Started user@500.service. Jul 2 07:48:37.640737 systemd[1]: Started session-1.scope. Jul 2 07:48:37.692228 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:44486.service. Jul 2 07:48:37.735206 sshd[1277]: Accepted publickey for core from 10.0.0.1 port 44486 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:48:37.736455 sshd[1277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:48:37.740163 systemd-logind[1184]: New session 2 of user core. Jul 2 07:48:37.741503 systemd[1]: Started session-2.scope. Jul 2 07:48:37.794868 sshd[1277]: pam_unix(sshd:session): session closed for user core Jul 2 07:48:37.797615 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:44486.service: Deactivated successfully. Jul 2 07:48:37.798253 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 07:48:37.798805 systemd-logind[1184]: Session 2 logged out. Waiting for processes to exit. Jul 2 07:48:37.799843 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:44490.service. Jul 2 07:48:37.800563 systemd-logind[1184]: Removed session 2. Jul 2 07:48:37.839904 sshd[1283]: Accepted publickey for core from 10.0.0.1 port 44490 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:48:37.841141 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:48:37.844689 systemd-logind[1184]: New session 3 of user core. Jul 2 07:48:37.845770 systemd[1]: Started session-3.scope. Jul 2 07:48:37.895433 sshd[1283]: pam_unix(sshd:session): session closed for user core Jul 2 07:48:37.898248 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:44490.service: Deactivated successfully. Jul 2 07:48:37.898833 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 07:48:37.899359 systemd-logind[1184]: Session 3 logged out. Waiting for processes to exit. Jul 2 07:48:37.900485 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:44496.service. Jul 2 07:48:37.901202 systemd-logind[1184]: Removed session 3. Jul 2 07:48:37.938588 sshd[1290]: Accepted publickey for core from 10.0.0.1 port 44496 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:48:37.939533 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:48:37.942851 systemd-logind[1184]: New session 4 of user core. Jul 2 07:48:37.943713 systemd[1]: Started session-4.scope. Jul 2 07:48:37.997642 sshd[1290]: pam_unix(sshd:session): session closed for user core Jul 2 07:48:38.000232 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:44496.service: Deactivated successfully. Jul 2 07:48:38.000791 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 07:48:38.001324 systemd-logind[1184]: Session 4 logged out. Waiting for processes to exit. Jul 2 07:48:38.002492 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:44502.service. Jul 2 07:48:38.003284 systemd-logind[1184]: Removed session 4. 
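Annotation: sessions 2 through 4 above are each opened and closed within well under a second, which looks like automated connectivity checks over SSH (that interpretation is an assumption; the log only shows the session churn). An illustrative helper for tallying opens and closes when scanning a saved copy of a capture like this one:

```python
# Illustrative only: count sshd session opens/closes in a saved copy of this log.
import re

OPEN_RE = re.compile(r"pam_unix\(sshd:session\): session opened for user (\w+)")
CLOSE_RE = re.compile(r"pam_unix\(sshd:session\): session closed for user (\w+)")

def session_counts(log_text: str) -> dict[str, int]:
    return {
        "opened": len(OPEN_RE.findall(log_text)),
        "closed": len(CLOSE_RE.findall(log_text)),
    }

# Example: session_counts(open("boot.log").read_text())
```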
Jul 2 07:48:38.040934 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 44502 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:48:38.042111 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:48:38.045063 systemd-logind[1184]: New session 5 of user core. Jul 2 07:48:38.045945 systemd[1]: Started session-5.scope. Jul 2 07:48:38.103229 sudo[1299]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:48:38.103499 sudo[1299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:48:38.114455 systemd[1]: Starting coreos-metadata.service... Jul 2 07:48:38.120531 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 07:48:38.120669 systemd[1]: Finished coreos-metadata.service. Jul 2 07:48:38.851196 systemd[1]: Stopped kubelet.service. Jul 2 07:48:38.851343 systemd[1]: kubelet.service: Consumed 1.418s CPU time. Jul 2 07:48:38.853236 systemd[1]: Starting kubelet.service... Jul 2 07:48:38.868019 systemd[1]: Reloading. Jul 2 07:48:38.978945 /usr/lib/systemd/system-generators/torcx-generator[1364]: time="2024-07-02T07:48:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:48:38.979396 /usr/lib/systemd/system-generators/torcx-generator[1364]: time="2024-07-02T07:48:38Z" level=info msg="torcx already run" Jul 2 07:48:39.797668 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:48:39.797686 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:48:39.814960 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:48:39.893738 systemd[1]: Started kubelet.service. Jul 2 07:48:39.895243 systemd[1]: Stopping kubelet.service... Jul 2 07:48:39.895584 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:48:39.895788 systemd[1]: Stopped kubelet.service. Jul 2 07:48:39.897444 systemd[1]: Starting kubelet.service... Jul 2 07:48:39.978793 systemd[1]: Started kubelet.service. Jul 2 07:48:40.068439 kubelet[1413]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:48:40.068439 kubelet[1413]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:48:40.068439 kubelet[1413]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
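Annotation: the reload above flags two deprecated directives in locksmithd.service (CPUShares= and MemoryLimit=) and a legacy /var/run path in docker.socket, with systemd naming CPUWeight=, MemoryMax= and /run as the replacements. A hedged sketch that scans a unit file for exactly those directives, using the mapping taken from the warnings themselves:

```python
# Sketch: flag the deprecated directives systemd warned about during the reload above.
from pathlib import Path

REPLACEMENTS = {
    "CPUShares=": "CPUWeight=",    # from the locksmithd.service:8 warning
    "MemoryLimit=": "MemoryMax=",  # from the locksmithd.service:9 warning
    "/var/run/": "/run/",          # from the docker.socket ListenStream warning
}

def scan_unit(path: Path) -> list[str]:
    """Return a note for each deprecated directive found in one unit file."""
    findings = []
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        for old, new in REPLACEMENTS.items():
            if old in line:
                findings.append(f"{path}:{lineno}: uses {old!r}; prefer {new!r}")
    return findings

# Example: scan_unit(Path("/usr/lib/systemd/system/locksmithd.service"))
```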
Jul 2 07:48:40.068439 kubelet[1413]: I0702 07:48:40.068349 1413 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:48:40.193733 kubelet[1413]: I0702 07:48:40.193669 1413 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 07:48:40.193733 kubelet[1413]: I0702 07:48:40.193712 1413 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:48:40.194018 kubelet[1413]: I0702 07:48:40.193983 1413 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 07:48:40.209535 kubelet[1413]: I0702 07:48:40.209502 1413 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:48:40.229262 kubelet[1413]: I0702 07:48:40.229233 1413 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 07:48:40.231360 kubelet[1413]: I0702 07:48:40.231328 1413 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:48:40.231529 kubelet[1413]: I0702 07:48:40.231509 1413 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:48:40.231987 kubelet[1413]: I0702 07:48:40.231960 1413 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:48:40.231987 kubelet[1413]: I0702 07:48:40.231976 1413 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:48:40.232664 kubelet[1413]: I0702 07:48:40.232633 1413 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:48:40.233842 kubelet[1413]: I0702 07:48:40.233814 1413 kubelet.go:393] "Attempting to sync node with API server" Jul 2 07:48:40.233842 kubelet[1413]: I0702 07:48:40.233836 1413 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:48:40.233926 kubelet[1413]: I0702 07:48:40.233860 1413 kubelet.go:309] "Adding apiserver pod source" Jul 2 07:48:40.233926 kubelet[1413]: I0702 07:48:40.233872 1413 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:48:40.234058 kubelet[1413]: E0702 07:48:40.234022 
1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:40.234112 kubelet[1413]: E0702 07:48:40.234062 1413 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:40.235722 kubelet[1413]: I0702 07:48:40.235698 1413 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:48:40.240669 kubelet[1413]: W0702 07:48:40.240621 1413 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 07:48:40.241203 kubelet[1413]: I0702 07:48:40.241184 1413 server.go:1232] "Started kubelet" Jul 2 07:48:40.243509 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 07:48:40.247124 kubelet[1413]: E0702 07:48:40.247090 1413 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 07:48:40.247124 kubelet[1413]: E0702 07:48:40.247118 1413 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:48:40.247380 kubelet[1413]: I0702 07:48:40.247358 1413 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:48:40.248117 kubelet[1413]: I0702 07:48:40.248075 1413 server.go:462] "Adding debug handlers to kubelet server" Jul 2 07:48:40.248856 kubelet[1413]: I0702 07:48:40.248828 1413 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 07:48:40.249033 kubelet[1413]: I0702 07:48:40.249022 1413 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:48:40.249381 kubelet[1413]: W0702 07:48:40.249359 1413 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.92" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 2 07:48:40.249412 kubelet[1413]: E0702 07:48:40.249382 1413 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.92" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 2 07:48:40.249412 kubelet[1413]: W0702 07:48:40.249404 1413 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 2 07:48:40.249412 kubelet[1413]: E0702 07:48:40.249412 1413 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 2 07:48:40.254626 kubelet[1413]: E0702 07:48:40.254521 1413 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.92.17de55de21da3a1f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.92", UID:"10.0.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.92"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 241158687, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 241158687, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.92"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 07:48:40.255372 kubelet[1413]: E0702 07:48:40.255317 1413 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.92.17de55de22350c6c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.92", UID:"10.0.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.92"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 247110764, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 247110764, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.92"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
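Annotation: the large "Server rejected event" blocks above all fail for the same underlying reason: the kubelet is still authenticating as system:anonymous while client certificate bootstrap runs in the background ("Client rotation is on, will bootstrap in background" earlier in the log), so RBAC denies event creation. An illustrative snippet for pulling the distinct denial messages out of these records when triaging a capture:

```python
# Illustrative: collect the distinct RBAC denial messages from the
# "Server rejected event ... (will not retry!)" records shown above.
import re

DENIAL_RE = re.compile(r"': '([^']+)' \(will not retry!\)")

def denial_reasons(log_text: str) -> set[str]:
    return set(DENIAL_RE.findall(log_text))

# For this capture the messages are variants of:
#   events is forbidden: User "system:anonymous" cannot create resource "events" ...
# and, later in the log, the same user being unable to patch existing events.
```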
Jul 2 07:48:40.260378 kubelet[1413]: I0702 07:48:40.260328 1413 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:48:40.260559 kubelet[1413]: I0702 07:48:40.260454 1413 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:48:40.260720 kubelet[1413]: I0702 07:48:40.260700 1413 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:48:40.260862 kubelet[1413]: I0702 07:48:40.260839 1413 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:48:40.261611 kubelet[1413]: E0702 07:48:40.261590 1413 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.92\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jul 2 07:48:40.261873 kubelet[1413]: W0702 07:48:40.261822 1413 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jul 2 07:48:40.261873 kubelet[1413]: E0702 07:48:40.261872 1413 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jul 2 07:48:40.282349 kubelet[1413]: I0702 07:48:40.282313 1413 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:48:40.282349 kubelet[1413]: I0702 07:48:40.282339 1413 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:48:40.282349 kubelet[1413]: I0702 07:48:40.282355 1413 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:48:40.283077 kubelet[1413]: E0702 07:48:40.282980 1413 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.92.17de55de24465562", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.92", UID:"10.0.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.92 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.92"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 281797986, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 281797986, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.92"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 07:48:40.283933 kubelet[1413]: E0702 07:48:40.283858 1413 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.92.17de55de244683ee", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.92", UID:"10.0.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.92 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.92"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 281809902, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 281809902, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.92"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 07:48:40.284541 kubelet[1413]: E0702 07:48:40.284486 1413 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.92.17de55de24469270", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.92", UID:"10.0.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.92 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.92"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 281813616, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 281813616, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.92"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 07:48:40.362235 kubelet[1413]: I0702 07:48:40.362192 1413 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.92" Jul 2 07:48:40.363388 kubelet[1413]: E0702 07:48:40.363345 1413 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.92" Jul 2 07:48:40.363682 kubelet[1413]: E0702 07:48:40.363611 1413 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.92.17de55de24465562", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.92", UID:"10.0.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.92 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.92"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 281797986, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 362131960, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.92"}': 'events "10.0.0.92.17de55de24465562" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 07:48:40.364489 kubelet[1413]: E0702 07:48:40.364412 1413 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.92.17de55de244683ee", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.92", UID:"10.0.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.92 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.92"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 281809902, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 362146734, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.92"}': 'events "10.0.0.92.17de55de244683ee" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 07:48:40.365371 kubelet[1413]: E0702 07:48:40.365274 1413 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.92.17de55de24469270", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.92", UID:"10.0.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.92 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.92"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 281813616, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 362150886, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.92"}': 'events "10.0.0.92.17de55de24469270" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 07:48:40.463696 kubelet[1413]: E0702 07:48:40.463653 1413 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.92\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Jul 2 07:48:40.564401 kubelet[1413]: I0702 07:48:40.564371 1413 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.92" Jul 2 07:48:40.565618 kubelet[1413]: E0702 07:48:40.565542 1413 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.92.17de55de24465562", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.92", UID:"10.0.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.92 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.92"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 281797986, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 564309247, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.92"}': 'events "10.0.0.92.17de55de24465562" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 07:48:40.565618 kubelet[1413]: E0702 07:48:40.565592 1413 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.92" Jul 2 07:48:40.566488 kubelet[1413]: E0702 07:48:40.566391 1413 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.92.17de55de244683ee", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.92", UID:"10.0.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.92 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.92"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 281809902, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 564320613, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.92"}': 'events "10.0.0.92.17de55de244683ee" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 07:48:40.567334 kubelet[1413]: E0702 07:48:40.567288 1413 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.92.17de55de24469270", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.92", UID:"10.0.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.92 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.92"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 281813616, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 564323982, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.92"}': 'events "10.0.0.92.17de55de24469270" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 07:48:40.865750 kubelet[1413]: E0702 07:48:40.865704 1413 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.92\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Jul 2 07:48:40.957873 kubelet[1413]: I0702 07:48:40.957809 1413 policy_none.go:49] "None policy: Start" Jul 2 07:48:40.961197 kubelet[1413]: I0702 07:48:40.961169 1413 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 07:48:40.961312 kubelet[1413]: I0702 07:48:40.961281 1413 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:48:40.967223 kubelet[1413]: I0702 07:48:40.967184 1413 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.92" Jul 2 07:48:40.968756 kubelet[1413]: E0702 07:48:40.968667 1413 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.92.17de55de24465562", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.92", UID:"10.0.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.92 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.92"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 281797986, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 967122288, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.92"}': 'events "10.0.0.92.17de55de24465562" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
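Annotation: the "Failed to ensure lease exists, will retry" records step their retry interval from 200ms (07:48:40.261) to 400ms (07:48:40.463) to 800ms (07:48:40.865), i.e. the lease controller doubles its wait after each denied attempt; the ceiling of that backoff is not visible in this capture. A minimal sketch of the doubling pattern, with the cap left as an explicit assumption:

```python
# Minimal sketch of the doubling retry interval seen in the lease-controller
# messages above (200ms -> 400ms -> 800ms). The 7000ms cap is an assumption
# for illustration only; the real ceiling does not appear in this log.
def lease_retry_intervals(initial_ms: int = 200, cap_ms: int = 7000):
    interval = initial_ms
    while True:
        yield interval
        interval = min(interval * 2, cap_ms)

gen = lease_retry_intervals()
print([next(gen) for _ in range(5)])  # [200, 400, 800, 1600, 3200]
```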
Jul 2 07:48:40.968917 kubelet[1413]: E0702 07:48:40.968887 1413 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.92" Jul 2 07:48:40.969872 kubelet[1413]: E0702 07:48:40.969760 1413 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.92.17de55de244683ee", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.92", UID:"10.0.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.92 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.92"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 281809902, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 967134682, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.92"}': 'events "10.0.0.92.17de55de244683ee" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 07:48:40.970850 kubelet[1413]: E0702 07:48:40.970787 1413 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.92.17de55de24469270", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.92", UID:"10.0.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.92 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.92"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 281813616, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 40, 967137857, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.92"}': 'events "10.0.0.92.17de55de24469270" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 07:48:41.018600 kubelet[1413]: I0702 07:48:41.018546 1413 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:48:41.019622 kubelet[1413]: I0702 07:48:41.019579 1413 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:48:41.019622 kubelet[1413]: I0702 07:48:41.019625 1413 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:48:41.019735 kubelet[1413]: I0702 07:48:41.019655 1413 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 07:48:41.019735 kubelet[1413]: E0702 07:48:41.019725 1413 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:48:41.021935 kubelet[1413]: W0702 07:48:41.021884 1413 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jul 2 07:48:41.022011 kubelet[1413]: E0702 07:48:41.021951 1413 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jul 2 07:48:41.049504 systemd[1]: Created slice kubepods.slice. Jul 2 07:48:41.053897 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 07:48:41.057200 systemd[1]: Created slice kubepods-besteffort.slice. Jul 2 07:48:41.063680 kubelet[1413]: I0702 07:48:41.063649 1413 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:48:41.063963 kubelet[1413]: I0702 07:48:41.063947 1413 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:48:41.066376 kubelet[1413]: E0702 07:48:41.066309 1413 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.92\" not found" Jul 2 07:48:41.069115 kubelet[1413]: E0702 07:48:41.069023 1413 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.92.17de55de531e2494", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.92", UID:"10.0.0.92", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.92"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 48, 41, 67693204, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 48, 41, 67693204, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.92"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 07:48:41.111361 kubelet[1413]: W0702 07:48:41.111315 1413 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.92" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 2 07:48:41.111361 kubelet[1413]: E0702 07:48:41.111353 1413 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.92" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 2 07:48:41.144735 kubelet[1413]: W0702 07:48:41.144561 1413 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 2 07:48:41.144735 kubelet[1413]: E0702 07:48:41.144606 1413 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 2 07:48:41.200709 kubelet[1413]: I0702 07:48:41.200588 1413 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 2 07:48:41.234800 kubelet[1413]: E0702 07:48:41.234746 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:41.604279 kubelet[1413]: E0702 07:48:41.604231 1413 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.92" not found Jul 2 07:48:41.670184 kubelet[1413]: E0702 07:48:41.670139 1413 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.92\" not found" node="10.0.0.92" Jul 2 07:48:41.770183 kubelet[1413]: I0702 07:48:41.770122 1413 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.92" Jul 2 07:48:41.777145 kubelet[1413]: I0702 07:48:41.777105 1413 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.92" Jul 2 07:48:41.809668 kubelet[1413]: E0702 07:48:41.809640 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:41.910497 kubelet[1413]: E0702 07:48:41.910355 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:42.011042 kubelet[1413]: E0702 07:48:42.010958 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:42.111931 kubelet[1413]: E0702 07:48:42.111854 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:42.212871 kubelet[1413]: E0702 07:48:42.212710 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:42.235131 kubelet[1413]: E0702 07:48:42.235087 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:42.313541 kubelet[1413]: E0702 07:48:42.313479 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:42.401046 sudo[1299]: pam_unix(sudo:session): session closed for user root Jul 2 07:48:42.402592 
sshd[1296]: pam_unix(sshd:session): session closed for user core Jul 2 07:48:42.405239 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:44502.service: Deactivated successfully. Jul 2 07:48:42.406043 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:48:42.406681 systemd-logind[1184]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:48:42.407377 systemd-logind[1184]: Removed session 5. Jul 2 07:48:42.413622 kubelet[1413]: E0702 07:48:42.413585 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:42.514161 kubelet[1413]: E0702 07:48:42.513982 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:42.614199 kubelet[1413]: E0702 07:48:42.614127 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:42.715095 kubelet[1413]: E0702 07:48:42.715024 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:42.815962 kubelet[1413]: E0702 07:48:42.815796 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:42.916526 kubelet[1413]: E0702 07:48:42.916455 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:43.016693 kubelet[1413]: E0702 07:48:43.016636 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:43.117923 kubelet[1413]: E0702 07:48:43.117846 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:43.218486 kubelet[1413]: E0702 07:48:43.218417 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:43.235935 kubelet[1413]: E0702 07:48:43.235875 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:43.319493 kubelet[1413]: E0702 07:48:43.319401 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:43.419969 kubelet[1413]: E0702 07:48:43.419788 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:43.520566 kubelet[1413]: E0702 07:48:43.520474 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:43.621738 kubelet[1413]: E0702 07:48:43.621638 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:43.722422 kubelet[1413]: E0702 07:48:43.722315 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:43.823066 kubelet[1413]: E0702 07:48:43.823018 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:43.924241 kubelet[1413]: E0702 07:48:43.924176 1413 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 2 07:48:44.025278 kubelet[1413]: I0702 07:48:44.025163 1413 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 2 07:48:44.025535 env[1193]: 
time="2024-07-02T07:48:44.025477911Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 07:48:44.025801 kubelet[1413]: I0702 07:48:44.025697 1413 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 2 07:48:44.236596 kubelet[1413]: I0702 07:48:44.236561 1413 apiserver.go:52] "Watching apiserver" Jul 2 07:48:44.237034 kubelet[1413]: E0702 07:48:44.236570 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:44.242274 kubelet[1413]: I0702 07:48:44.242245 1413 topology_manager.go:215] "Topology Admit Handler" podUID="fe1067c4-70cd-465a-a499-32c10f41faf7" podNamespace="kube-system" podName="cilium-qg82r" Jul 2 07:48:44.242441 kubelet[1413]: I0702 07:48:44.242415 1413 topology_manager.go:215] "Topology Admit Handler" podUID="745348b4-c9b0-44a8-8a48-cb73c2d9f636" podNamespace="kube-system" podName="kube-proxy-z2cwl" Jul 2 07:48:44.248172 systemd[1]: Created slice kubepods-besteffort-pod745348b4_c9b0_44a8_8a48_cb73c2d9f636.slice. Jul 2 07:48:44.256852 systemd[1]: Created slice kubepods-burstable-podfe1067c4_70cd_465a_a499_32c10f41faf7.slice. Jul 2 07:48:44.261318 kubelet[1413]: I0702 07:48:44.261291 1413 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:48:44.283945 kubelet[1413]: I0702 07:48:44.283783 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/745348b4-c9b0-44a8-8a48-cb73c2d9f636-xtables-lock\") pod \"kube-proxy-z2cwl\" (UID: \"745348b4-c9b0-44a8-8a48-cb73c2d9f636\") " pod="kube-system/kube-proxy-z2cwl" Jul 2 07:48:44.283945 kubelet[1413]: I0702 07:48:44.283895 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/745348b4-c9b0-44a8-8a48-cb73c2d9f636-lib-modules\") pod \"kube-proxy-z2cwl\" (UID: \"745348b4-c9b0-44a8-8a48-cb73c2d9f636\") " pod="kube-system/kube-proxy-z2cwl" Jul 2 07:48:44.283945 kubelet[1413]: I0702 07:48:44.283924 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-cilium-run\") pod \"cilium-qg82r\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " pod="kube-system/cilium-qg82r" Jul 2 07:48:44.283945 kubelet[1413]: I0702 07:48:44.283948 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-hostproc\") pod \"cilium-qg82r\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " pod="kube-system/cilium-qg82r" Jul 2 07:48:44.284204 kubelet[1413]: I0702 07:48:44.283972 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-cilium-cgroup\") pod \"cilium-qg82r\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " pod="kube-system/cilium-qg82r" Jul 2 07:48:44.284204 kubelet[1413]: I0702 07:48:44.284012 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-etc-cni-netd\") pod \"cilium-qg82r\" (UID: 
\"fe1067c4-70cd-465a-a499-32c10f41faf7\") " pod="kube-system/cilium-qg82r" Jul 2 07:48:44.284204 kubelet[1413]: I0702 07:48:44.284053 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-xtables-lock\") pod \"cilium-qg82r\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " pod="kube-system/cilium-qg82r" Jul 2 07:48:44.284204 kubelet[1413]: I0702 07:48:44.284114 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/745348b4-c9b0-44a8-8a48-cb73c2d9f636-kube-proxy\") pod \"kube-proxy-z2cwl\" (UID: \"745348b4-c9b0-44a8-8a48-cb73c2d9f636\") " pod="kube-system/kube-proxy-z2cwl" Jul 2 07:48:44.284204 kubelet[1413]: I0702 07:48:44.284152 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-bpf-maps\") pod \"cilium-qg82r\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " pod="kube-system/cilium-qg82r" Jul 2 07:48:44.284204 kubelet[1413]: I0702 07:48:44.284192 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fe1067c4-70cd-465a-a499-32c10f41faf7-clustermesh-secrets\") pod \"cilium-qg82r\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " pod="kube-system/cilium-qg82r" Jul 2 07:48:44.284519 kubelet[1413]: I0702 07:48:44.284229 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe1067c4-70cd-465a-a499-32c10f41faf7-cilium-config-path\") pod \"cilium-qg82r\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " pod="kube-system/cilium-qg82r" Jul 2 07:48:44.284519 kubelet[1413]: I0702 07:48:44.284258 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-host-proc-sys-net\") pod \"cilium-qg82r\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " pod="kube-system/cilium-qg82r" Jul 2 07:48:44.284519 kubelet[1413]: I0702 07:48:44.284287 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q8hj\" (UniqueName: \"kubernetes.io/projected/fe1067c4-70cd-465a-a499-32c10f41faf7-kube-api-access-9q8hj\") pod \"cilium-qg82r\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " pod="kube-system/cilium-qg82r" Jul 2 07:48:44.284519 kubelet[1413]: I0702 07:48:44.284312 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw8sq\" (UniqueName: \"kubernetes.io/projected/745348b4-c9b0-44a8-8a48-cb73c2d9f636-kube-api-access-pw8sq\") pod \"kube-proxy-z2cwl\" (UID: \"745348b4-c9b0-44a8-8a48-cb73c2d9f636\") " pod="kube-system/kube-proxy-z2cwl" Jul 2 07:48:44.284519 kubelet[1413]: I0702 07:48:44.284357 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-cni-path\") pod \"cilium-qg82r\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " pod="kube-system/cilium-qg82r" Jul 2 07:48:44.284755 kubelet[1413]: I0702 07:48:44.284393 1413 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-lib-modules\") pod \"cilium-qg82r\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " pod="kube-system/cilium-qg82r" Jul 2 07:48:44.284755 kubelet[1413]: I0702 07:48:44.284425 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-host-proc-sys-kernel\") pod \"cilium-qg82r\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " pod="kube-system/cilium-qg82r" Jul 2 07:48:44.284755 kubelet[1413]: I0702 07:48:44.284471 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fe1067c4-70cd-465a-a499-32c10f41faf7-hubble-tls\") pod \"cilium-qg82r\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " pod="kube-system/cilium-qg82r" Jul 2 07:48:44.555933 kubelet[1413]: E0702 07:48:44.555606 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:44.556339 env[1193]: time="2024-07-02T07:48:44.556306013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z2cwl,Uid:745348b4-c9b0-44a8-8a48-cb73c2d9f636,Namespace:kube-system,Attempt:0,}" Jul 2 07:48:44.567865 kubelet[1413]: E0702 07:48:44.567826 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:44.568306 env[1193]: time="2024-07-02T07:48:44.568270440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qg82r,Uid:fe1067c4-70cd-465a-a499-32c10f41faf7,Namespace:kube-system,Attempt:0,}" Jul 2 07:48:45.229551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1005477924.mount: Deactivated successfully. 
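Each reconciler_common.go "VerifyControllerAttachedVolume started" entry above corresponds to one volume declared in the two pods admitted by the topology manager: hostPath mounts for cilium-qg82r (cilium-run, hostproc, bpf-maps, and so on) and kube-proxy-z2cwl (xtables-lock, lib-modules), a ConfigMap and a Secret (kube-proxy, clustermesh-secrets), and projected service-account token volumes (kube-api-access-*). As an illustrative aside, a small client-go sketch that fetches those pods and prints each volume with its source kind, mirroring the UniqueName prefixes (kubernetes.io/host-path, .../configmap, .../secret, .../projected) seen in the log; the kubeconfig path is again a placeholder assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path (assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The two kube-system pods admitted in the entries above.
	for _, name := range []string{"cilium-qg82r", "kube-proxy-z2cwl"} {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, v := range pod.Spec.Volumes {
			// Print the volume name and the kind of its source.
			switch {
			case v.HostPath != nil:
				fmt.Printf("%s: %s (hostPath %s)\n", name, v.Name, v.HostPath.Path)
			case v.ConfigMap != nil:
				fmt.Printf("%s: %s (configMap %s)\n", name, v.Name, v.ConfigMap.Name)
			case v.Secret != nil:
				fmt.Printf("%s: %s (secret %s)\n", name, v.Name, v.Secret.SecretName)
			case v.Projected != nil:
				fmt.Printf("%s: %s (projected token)\n", name, v.Name)
			default:
				fmt.Printf("%s: %s (other source)\n", name, v.Name)
			}
		}
	}
}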
Jul 2 07:48:45.237261 kubelet[1413]: E0702 07:48:45.237227 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:45.237491 env[1193]: time="2024-07-02T07:48:45.237296353Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:45.238184 env[1193]: time="2024-07-02T07:48:45.238154245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:45.241086 env[1193]: time="2024-07-02T07:48:45.241037840Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:45.242385 env[1193]: time="2024-07-02T07:48:45.242357773Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:45.243747 env[1193]: time="2024-07-02T07:48:45.243702024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:45.245110 env[1193]: time="2024-07-02T07:48:45.245069179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:45.246460 env[1193]: time="2024-07-02T07:48:45.246430236Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:45.247079 env[1193]: time="2024-07-02T07:48:45.247045175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:45.272224 env[1193]: time="2024-07-02T07:48:45.272133153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:48:45.272224 env[1193]: time="2024-07-02T07:48:45.272187778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:48:45.272224 env[1193]: time="2024-07-02T07:48:45.272200231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:48:45.272507 env[1193]: time="2024-07-02T07:48:45.272475227Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7 pid=1475 runtime=io.containerd.runc.v2 Jul 2 07:48:45.275206 env[1193]: time="2024-07-02T07:48:45.275137765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:48:45.275206 env[1193]: time="2024-07-02T07:48:45.275169364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:48:45.275206 env[1193]: time="2024-07-02T07:48:45.275178787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:48:45.276904 env[1193]: time="2024-07-02T07:48:45.275339351Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3df0a3e6a4549d86c70b6f652d762776a56be485e4661844eec87446211364f pid=1471 runtime=io.containerd.runc.v2 Jul 2 07:48:45.290201 systemd[1]: Started cri-containerd-a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7.scope. Jul 2 07:48:45.295074 systemd[1]: Started cri-containerd-c3df0a3e6a4549d86c70b6f652d762776a56be485e4661844eec87446211364f.scope. Jul 2 07:48:45.454975 env[1193]: time="2024-07-02T07:48:45.454917691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qg82r,Uid:fe1067c4-70cd-465a-a499-32c10f41faf7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\"" Jul 2 07:48:45.457033 kubelet[1413]: E0702 07:48:45.456392 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:45.457860 env[1193]: time="2024-07-02T07:48:45.457818656Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 07:48:45.463624 env[1193]: time="2024-07-02T07:48:45.463593904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z2cwl,Uid:745348b4-c9b0-44a8-8a48-cb73c2d9f636,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3df0a3e6a4549d86c70b6f652d762776a56be485e4661844eec87446211364f\"" Jul 2 07:48:45.464344 kubelet[1413]: E0702 07:48:45.464304 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:46.238794 kubelet[1413]: E0702 07:48:46.238731 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:47.239528 kubelet[1413]: E0702 07:48:47.239462 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:48.239706 kubelet[1413]: E0702 07:48:48.239640 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:49.240156 kubelet[1413]: E0702 07:48:49.240111 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:49.683915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1652918658.mount: Deactivated successfully. 
Jul 2 07:48:50.240292 kubelet[1413]: E0702 07:48:50.240226 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:51.240460 kubelet[1413]: E0702 07:48:51.240405 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:52.240596 kubelet[1413]: E0702 07:48:52.240528 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:53.241557 kubelet[1413]: E0702 07:48:53.241481 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:53.472372 env[1193]: time="2024-07-02T07:48:53.472307388Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:53.474638 env[1193]: time="2024-07-02T07:48:53.474591176Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:53.476519 env[1193]: time="2024-07-02T07:48:53.476490490Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:53.477192 env[1193]: time="2024-07-02T07:48:53.477135678Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 07:48:53.478399 env[1193]: time="2024-07-02T07:48:53.478350212Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 07:48:53.479391 env[1193]: time="2024-07-02T07:48:53.479346470Z" level=info msg="CreateContainer within sandbox \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:48:53.494834 env[1193]: time="2024-07-02T07:48:53.494715610Z" level=info msg="CreateContainer within sandbox \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36\"" Jul 2 07:48:53.495463 env[1193]: time="2024-07-02T07:48:53.495412435Z" level=info msg="StartContainer for \"07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36\"" Jul 2 07:48:53.513385 systemd[1]: run-containerd-runc-k8s.io-07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36-runc.X6bFed.mount: Deactivated successfully. Jul 2 07:48:53.514816 systemd[1]: Started cri-containerd-07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36.scope. Jul 2 07:48:53.538516 env[1193]: time="2024-07-02T07:48:53.538451700Z" level=info msg="StartContainer for \"07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36\" returns successfully" Jul 2 07:48:53.545310 systemd[1]: cri-containerd-07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36.scope: Deactivated successfully. 
Jul 2 07:48:54.047508 kubelet[1413]: E0702 07:48:54.047474 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:54.241826 kubelet[1413]: E0702 07:48:54.241794 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:54.318754 env[1193]: time="2024-07-02T07:48:54.318637987Z" level=info msg="shim disconnected" id=07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36 Jul 2 07:48:54.318754 env[1193]: time="2024-07-02T07:48:54.318688353Z" level=warning msg="cleaning up after shim disconnected" id=07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36 namespace=k8s.io Jul 2 07:48:54.318754 env[1193]: time="2024-07-02T07:48:54.318699380Z" level=info msg="cleaning up dead shim" Jul 2 07:48:54.325419 env[1193]: time="2024-07-02T07:48:54.325376039Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1592 runtime=io.containerd.runc.v2\n" Jul 2 07:48:54.489502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36-rootfs.mount: Deactivated successfully. Jul 2 07:48:55.049756 kubelet[1413]: E0702 07:48:55.049722 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:55.051416 env[1193]: time="2024-07-02T07:48:55.051377570Z" level=info msg="CreateContainer within sandbox \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:48:55.068751 env[1193]: time="2024-07-02T07:48:55.068696965Z" level=info msg="CreateContainer within sandbox \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3\"" Jul 2 07:48:55.069226 env[1193]: time="2024-07-02T07:48:55.069160406Z" level=info msg="StartContainer for \"743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3\"" Jul 2 07:48:55.088853 systemd[1]: Started cri-containerd-743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3.scope. Jul 2 07:48:55.118186 env[1193]: time="2024-07-02T07:48:55.118062446Z" level=info msg="StartContainer for \"743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3\" returns successfully" Jul 2 07:48:55.120580 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:48:55.120814 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:48:55.121291 systemd[1]: Stopping systemd-sysctl.service... Jul 2 07:48:55.122675 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:48:55.122922 systemd[1]: cri-containerd-743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3.scope: Deactivated successfully. Jul 2 07:48:55.131164 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 07:48:55.226816 env[1193]: time="2024-07-02T07:48:55.226738709Z" level=info msg="shim disconnected" id=743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3 Jul 2 07:48:55.226816 env[1193]: time="2024-07-02T07:48:55.226794470Z" level=warning msg="cleaning up after shim disconnected" id=743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3 namespace=k8s.io Jul 2 07:48:55.226816 env[1193]: time="2024-07-02T07:48:55.226803948Z" level=info msg="cleaning up dead shim" Jul 2 07:48:55.233265 env[1193]: time="2024-07-02T07:48:55.233210306Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1657 runtime=io.containerd.runc.v2\n" Jul 2 07:48:55.242181 kubelet[1413]: E0702 07:48:55.242145 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:55.488780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3-rootfs.mount: Deactivated successfully. Jul 2 07:48:55.488884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3648623987.mount: Deactivated successfully. Jul 2 07:48:55.834429 env[1193]: time="2024-07-02T07:48:55.834299041Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:55.836121 env[1193]: time="2024-07-02T07:48:55.836068733Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:55.837469 env[1193]: time="2024-07-02T07:48:55.837434611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:55.838697 env[1193]: time="2024-07-02T07:48:55.838652682Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:55.839042 env[1193]: time="2024-07-02T07:48:55.838979952Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 07:48:55.840847 env[1193]: time="2024-07-02T07:48:55.840820044Z" level=info msg="CreateContainer within sandbox \"c3df0a3e6a4549d86c70b6f652d762776a56be485e4661844eec87446211364f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:48:55.854087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205487632.mount: Deactivated successfully. Jul 2 07:48:55.858575 env[1193]: time="2024-07-02T07:48:55.858511658Z" level=info msg="CreateContainer within sandbox \"c3df0a3e6a4549d86c70b6f652d762776a56be485e4661844eec87446211364f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"34c8423d862f7dcb6bd60e14b0541e4e2bcfd59cf7fc5a73920997c53a6fdd73\"" Jul 2 07:48:55.859254 env[1193]: time="2024-07-02T07:48:55.859226378Z" level=info msg="StartContainer for \"34c8423d862f7dcb6bd60e14b0541e4e2bcfd59cf7fc5a73920997c53a6fdd73\"" Jul 2 07:48:55.875562 systemd[1]: Started cri-containerd-34c8423d862f7dcb6bd60e14b0541e4e2bcfd59cf7fc5a73920997c53a6fdd73.scope. 
Jul 2 07:48:55.900548 env[1193]: time="2024-07-02T07:48:55.900499953Z" level=info msg="StartContainer for \"34c8423d862f7dcb6bd60e14b0541e4e2bcfd59cf7fc5a73920997c53a6fdd73\" returns successfully" Jul 2 07:48:56.052617 kubelet[1413]: E0702 07:48:56.052573 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:56.053451 kubelet[1413]: E0702 07:48:56.053426 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:56.054691 env[1193]: time="2024-07-02T07:48:56.054642140Z" level=info msg="CreateContainer within sandbox \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:48:56.073382 env[1193]: time="2024-07-02T07:48:56.073324970Z" level=info msg="CreateContainer within sandbox \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8\"" Jul 2 07:48:56.073903 env[1193]: time="2024-07-02T07:48:56.073875374Z" level=info msg="StartContainer for \"89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8\"" Jul 2 07:48:56.089961 systemd[1]: Started cri-containerd-89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8.scope. Jul 2 07:48:56.116818 systemd[1]: cri-containerd-89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8.scope: Deactivated successfully. Jul 2 07:48:56.117020 env[1193]: time="2024-07-02T07:48:56.116842228Z" level=info msg="StartContainer for \"89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8\" returns successfully" Jul 2 07:48:56.242491 kubelet[1413]: E0702 07:48:56.242433 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:56.410213 env[1193]: time="2024-07-02T07:48:56.410157179Z" level=info msg="shim disconnected" id=89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8 Jul 2 07:48:56.410213 env[1193]: time="2024-07-02T07:48:56.410210557Z" level=warning msg="cleaning up after shim disconnected" id=89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8 namespace=k8s.io Jul 2 07:48:56.410213 env[1193]: time="2024-07-02T07:48:56.410221889Z" level=info msg="cleaning up dead shim" Jul 2 07:48:56.416871 env[1193]: time="2024-07-02T07:48:56.416823685Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1876 runtime=io.containerd.runc.v2\n" Jul 2 07:48:57.056332 kubelet[1413]: E0702 07:48:57.056297 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:57.056332 kubelet[1413]: E0702 07:48:57.056343 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:57.057948 env[1193]: time="2024-07-02T07:48:57.057906468Z" level=info msg="CreateContainer within sandbox \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 
07:48:57.067357 kubelet[1413]: I0702 07:48:57.067325 1413 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-z2cwl" podStartSLOduration=5.69280762 podCreationTimestamp="2024-07-02 07:48:41 +0000 UTC" firstStartedPulling="2024-07-02 07:48:45.464853556 +0000 UTC m=+5.478122050" lastFinishedPulling="2024-07-02 07:48:55.839312644 +0000 UTC m=+15.852581148" observedRunningTime="2024-07-02 07:48:56.070335605 +0000 UTC m=+16.083604099" watchObservedRunningTime="2024-07-02 07:48:57.067266718 +0000 UTC m=+17.080535212" Jul 2 07:48:57.072503 env[1193]: time="2024-07-02T07:48:57.072450090Z" level=info msg="CreateContainer within sandbox \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590\"" Jul 2 07:48:57.073076 env[1193]: time="2024-07-02T07:48:57.073025896Z" level=info msg="StartContainer for \"18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590\"" Jul 2 07:48:57.087583 systemd[1]: Started cri-containerd-18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590.scope. Jul 2 07:48:57.106315 systemd[1]: cri-containerd-18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590.scope: Deactivated successfully. Jul 2 07:48:57.107075 env[1193]: time="2024-07-02T07:48:57.106809508Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe1067c4_70cd_465a_a499_32c10f41faf7.slice/cri-containerd-18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590.scope/memory.events\": no such file or directory" Jul 2 07:48:57.109961 env[1193]: time="2024-07-02T07:48:57.109912876Z" level=info msg="StartContainer for \"18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590\" returns successfully" Jul 2 07:48:57.128473 env[1193]: time="2024-07-02T07:48:57.128417274Z" level=info msg="shim disconnected" id=18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590 Jul 2 07:48:57.128473 env[1193]: time="2024-07-02T07:48:57.128454493Z" level=warning msg="cleaning up after shim disconnected" id=18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590 namespace=k8s.io Jul 2 07:48:57.128473 env[1193]: time="2024-07-02T07:48:57.128462882Z" level=info msg="cleaning up dead shim" Jul 2 07:48:57.134687 env[1193]: time="2024-07-02T07:48:57.134631898Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1931 runtime=io.containerd.runc.v2\n" Jul 2 07:48:57.242612 kubelet[1413]: E0702 07:48:57.242567 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:57.489013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590-rootfs.mount: Deactivated successfully. 
Jul 2 07:48:58.061449 kubelet[1413]: E0702 07:48:58.061367 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:58.063778 env[1193]: time="2024-07-02T07:48:58.063724875Z" level=info msg="CreateContainer within sandbox \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:48:58.081912 env[1193]: time="2024-07-02T07:48:58.081838989Z" level=info msg="CreateContainer within sandbox \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312\"" Jul 2 07:48:58.082489 env[1193]: time="2024-07-02T07:48:58.082426257Z" level=info msg="StartContainer for \"83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312\"" Jul 2 07:48:58.100564 systemd[1]: Started cri-containerd-83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312.scope. Jul 2 07:48:58.125690 env[1193]: time="2024-07-02T07:48:58.125617648Z" level=info msg="StartContainer for \"83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312\" returns successfully" Jul 2 07:48:58.243567 kubelet[1413]: E0702 07:48:58.243505 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:58.246351 kubelet[1413]: I0702 07:48:58.245808 1413 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 07:48:58.452026 kernel: Initializing XFRM netlink socket Jul 2 07:48:58.982081 kubelet[1413]: I0702 07:48:58.982019 1413 topology_manager.go:215] "Topology Admit Handler" podUID="540bf655-63ff-4c0c-8b19-f1c04da55245" podNamespace="default" podName="nginx-deployment-6d5f899847-454hm" Jul 2 07:48:58.986799 systemd[1]: Created slice kubepods-besteffort-pod540bf655_63ff_4c0c_8b19_f1c04da55245.slice. 
Jul 2 07:48:59.065696 kubelet[1413]: E0702 07:48:59.065667 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:48:59.078371 kubelet[1413]: I0702 07:48:59.078313 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzsjk\" (UniqueName: \"kubernetes.io/projected/540bf655-63ff-4c0c-8b19-f1c04da55245-kube-api-access-hzsjk\") pod \"nginx-deployment-6d5f899847-454hm\" (UID: \"540bf655-63ff-4c0c-8b19-f1c04da55245\") " pod="default/nginx-deployment-6d5f899847-454hm" Jul 2 07:48:59.078874 kubelet[1413]: I0702 07:48:59.078849 1413 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qg82r" podStartSLOduration=10.05860786 podCreationTimestamp="2024-07-02 07:48:41 +0000 UTC" firstStartedPulling="2024-07-02 07:48:45.45742639 +0000 UTC m=+5.470694884" lastFinishedPulling="2024-07-02 07:48:53.477631797 +0000 UTC m=+13.490900282" observedRunningTime="2024-07-02 07:48:59.078564114 +0000 UTC m=+19.091832609" watchObservedRunningTime="2024-07-02 07:48:59.078813258 +0000 UTC m=+19.092081752" Jul 2 07:48:59.244430 kubelet[1413]: E0702 07:48:59.244263 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:59.289780 env[1193]: time="2024-07-02T07:48:59.289718531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-454hm,Uid:540bf655-63ff-4c0c-8b19-f1c04da55245,Namespace:default,Attempt:0,}" Jul 2 07:49:00.067843 kubelet[1413]: E0702 07:49:00.067800 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:00.071177 systemd-networkd[1031]: cilium_host: Link UP Jul 2 07:49:00.071313 systemd-networkd[1031]: cilium_net: Link UP Jul 2 07:49:00.071316 systemd-networkd[1031]: cilium_net: Gained carrier Jul 2 07:49:00.071446 systemd-networkd[1031]: cilium_host: Gained carrier Jul 2 07:49:00.073792 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 07:49:00.072524 systemd-networkd[1031]: cilium_host: Gained IPv6LL Jul 2 07:49:00.131155 systemd-networkd[1031]: cilium_net: Gained IPv6LL Jul 2 07:49:00.145873 systemd-networkd[1031]: cilium_vxlan: Link UP Jul 2 07:49:00.145883 systemd-networkd[1031]: cilium_vxlan: Gained carrier Jul 2 07:49:00.234451 kubelet[1413]: E0702 07:49:00.234378 1413 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:00.244737 kubelet[1413]: E0702 07:49:00.244695 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:00.338021 kernel: NET: Registered PF_ALG protocol family Jul 2 07:49:00.857209 systemd-networkd[1031]: lxc_health: Link UP Jul 2 07:49:00.872353 systemd-networkd[1031]: lxc_health: Gained carrier Jul 2 07:49:00.873017 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:49:01.069362 kubelet[1413]: E0702 07:49:01.069323 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:01.245708 kubelet[1413]: E0702 07:49:01.245545 1413 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:01.319932 systemd-networkd[1031]: lxc7563f09816fd: Link UP Jul 2 07:49:01.327012 kernel: eth0: renamed from tmpc9868 Jul 2 07:49:01.334057 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:49:01.334108 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7563f09816fd: link becomes ready Jul 2 07:49:01.334171 systemd-networkd[1031]: lxc7563f09816fd: Gained carrier Jul 2 07:49:01.939939 systemd-networkd[1031]: lxc_health: Gained IPv6LL Jul 2 07:49:02.129315 systemd-networkd[1031]: cilium_vxlan: Gained IPv6LL Jul 2 07:49:02.246091 kubelet[1413]: E0702 07:49:02.245933 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:02.570023 kubelet[1413]: E0702 07:49:02.569877 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:02.705175 systemd-networkd[1031]: lxc7563f09816fd: Gained IPv6LL Jul 2 07:49:03.246616 kubelet[1413]: E0702 07:49:03.246541 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:04.247049 kubelet[1413]: E0702 07:49:04.246963 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:04.725696 env[1193]: time="2024-07-02T07:49:04.725609400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:49:04.725696 env[1193]: time="2024-07-02T07:49:04.725643578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:49:04.725696 env[1193]: time="2024-07-02T07:49:04.725653466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:49:04.726150 env[1193]: time="2024-07-02T07:49:04.725761016Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c986815ea1792bffd54f09fc6b461e1a3c9885a00a8e1fe25adde448c47067c8 pid=2472 runtime=io.containerd.runc.v2 Jul 2 07:49:04.739452 systemd[1]: Started cri-containerd-c986815ea1792bffd54f09fc6b461e1a3c9885a00a8e1fe25adde448c47067c8.scope. 
Jul 2 07:49:04.749572 systemd-resolved[1144]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:49:04.769683 env[1193]: time="2024-07-02T07:49:04.769631334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-454hm,Uid:540bf655-63ff-4c0c-8b19-f1c04da55245,Namespace:default,Attempt:0,} returns sandbox id \"c986815ea1792bffd54f09fc6b461e1a3c9885a00a8e1fe25adde448c47067c8\"" Jul 2 07:49:04.771414 env[1193]: time="2024-07-02T07:49:04.771379586Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 07:49:05.247842 kubelet[1413]: E0702 07:49:05.247761 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:06.248377 kubelet[1413]: E0702 07:49:06.248321 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:07.249525 kubelet[1413]: E0702 07:49:07.249451 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:08.249792 kubelet[1413]: E0702 07:49:08.249730 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:08.531453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983334486.mount: Deactivated successfully. Jul 2 07:49:09.250916 kubelet[1413]: E0702 07:49:09.250846 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:10.252016 kubelet[1413]: E0702 07:49:10.251935 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:10.602923 env[1193]: time="2024-07-02T07:49:10.602868418Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:10.604752 env[1193]: time="2024-07-02T07:49:10.604695625Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:10.606425 env[1193]: time="2024-07-02T07:49:10.606379911Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:10.608216 env[1193]: time="2024-07-02T07:49:10.608185363Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:10.609629 env[1193]: time="2024-07-02T07:49:10.609579336Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 07:49:10.610909 env[1193]: time="2024-07-02T07:49:10.610882545Z" level=info msg="CreateContainer within sandbox \"c986815ea1792bffd54f09fc6b461e1a3c9885a00a8e1fe25adde448c47067c8\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 2 07:49:10.623408 env[1193]: time="2024-07-02T07:49:10.623351221Z" level=info msg="CreateContainer within sandbox \"c986815ea1792bffd54f09fc6b461e1a3c9885a00a8e1fe25adde448c47067c8\" for 
&ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e1a2c038db39911e7dd0a50bddd0cf435fb30edecaaeee713389bca57835b02e\"" Jul 2 07:49:10.623796 env[1193]: time="2024-07-02T07:49:10.623771500Z" level=info msg="StartContainer for \"e1a2c038db39911e7dd0a50bddd0cf435fb30edecaaeee713389bca57835b02e\"" Jul 2 07:49:10.639854 systemd[1]: Started cri-containerd-e1a2c038db39911e7dd0a50bddd0cf435fb30edecaaeee713389bca57835b02e.scope. Jul 2 07:49:10.664136 env[1193]: time="2024-07-02T07:49:10.664080432Z" level=info msg="StartContainer for \"e1a2c038db39911e7dd0a50bddd0cf435fb30edecaaeee713389bca57835b02e\" returns successfully" Jul 2 07:49:11.091189 kubelet[1413]: I0702 07:49:11.091069 1413 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-454hm" podStartSLOduration=7.2520775109999995 podCreationTimestamp="2024-07-02 07:48:58 +0000 UTC" firstStartedPulling="2024-07-02 07:49:04.77088035 +0000 UTC m=+24.784148834" lastFinishedPulling="2024-07-02 07:49:10.609835731 +0000 UTC m=+30.623104225" observedRunningTime="2024-07-02 07:49:11.090964707 +0000 UTC m=+31.104233201" watchObservedRunningTime="2024-07-02 07:49:11.091032902 +0000 UTC m=+31.104301396" Jul 2 07:49:11.252920 kubelet[1413]: E0702 07:49:11.252861 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:12.253163 kubelet[1413]: E0702 07:49:12.253122 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:13.254113 kubelet[1413]: E0702 07:49:13.254041 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:13.289541 kubelet[1413]: I0702 07:49:13.289499 1413 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 07:49:13.290213 kubelet[1413]: E0702 07:49:13.290190 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:14.089829 kubelet[1413]: E0702 07:49:14.089790 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:14.254170 kubelet[1413]: E0702 07:49:14.254114 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:15.255180 kubelet[1413]: E0702 07:49:15.255122 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:15.588055 kubelet[1413]: I0702 07:49:15.587890 1413 topology_manager.go:215] "Topology Admit Handler" podUID="75f719fd-ad85-40f7-bae8-beb268550a50" podNamespace="default" podName="nfs-server-provisioner-0" Jul 2 07:49:15.593389 systemd[1]: Created slice kubepods-besteffort-pod75f719fd_ad85_40f7_bae8_beb268550a50.slice. 
Jul 2 07:49:15.665085 kubelet[1413]: I0702 07:49:15.665031 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99n2j\" (UniqueName: \"kubernetes.io/projected/75f719fd-ad85-40f7-bae8-beb268550a50-kube-api-access-99n2j\") pod \"nfs-server-provisioner-0\" (UID: \"75f719fd-ad85-40f7-bae8-beb268550a50\") " pod="default/nfs-server-provisioner-0" Jul 2 07:49:15.665085 kubelet[1413]: I0702 07:49:15.665085 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/75f719fd-ad85-40f7-bae8-beb268550a50-data\") pod \"nfs-server-provisioner-0\" (UID: \"75f719fd-ad85-40f7-bae8-beb268550a50\") " pod="default/nfs-server-provisioner-0" Jul 2 07:49:15.896202 env[1193]: time="2024-07-02T07:49:15.896149370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:75f719fd-ad85-40f7-bae8-beb268550a50,Namespace:default,Attempt:0,}" Jul 2 07:49:15.925387 systemd-networkd[1031]: lxc4d38dbba45b1: Link UP Jul 2 07:49:15.932128 kernel: eth0: renamed from tmpad88a Jul 2 07:49:15.942616 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:49:15.942684 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4d38dbba45b1: link becomes ready Jul 2 07:49:15.942630 systemd-networkd[1031]: lxc4d38dbba45b1: Gained carrier Jul 2 07:49:16.154569 env[1193]: time="2024-07-02T07:49:16.154391373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:49:16.154569 env[1193]: time="2024-07-02T07:49:16.154450174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:49:16.154569 env[1193]: time="2024-07-02T07:49:16.154460268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:49:16.154953 env[1193]: time="2024-07-02T07:49:16.154910699Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad88ad3f0ba7eaa1580252bfca44c92f7fd3e9a1ea80af5d5a4a2b034cf1b653 pid=2600 runtime=io.containerd.runc.v2 Jul 2 07:49:16.171108 systemd[1]: Started cri-containerd-ad88ad3f0ba7eaa1580252bfca44c92f7fd3e9a1ea80af5d5a4a2b034cf1b653.scope. Jul 2 07:49:16.184498 systemd-resolved[1144]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:49:16.206092 env[1193]: time="2024-07-02T07:49:16.206043323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:75f719fd-ad85-40f7-bae8-beb268550a50,Namespace:default,Attempt:0,} returns sandbox id \"ad88ad3f0ba7eaa1580252bfca44c92f7fd3e9a1ea80af5d5a4a2b034cf1b653\"" Jul 2 07:49:16.207439 env[1193]: time="2024-07-02T07:49:16.207387356Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 2 07:49:16.255771 kubelet[1413]: E0702 07:49:16.255717 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:17.076454 update_engine[1186]: I0702 07:49:17.076396 1186 update_attempter.cc:509] Updating boot flags... 
Jul 2 07:49:17.256178 kubelet[1413]: E0702 07:49:17.256129 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:18.001189 systemd-networkd[1031]: lxc4d38dbba45b1: Gained IPv6LL Jul 2 07:49:18.256912 kubelet[1413]: E0702 07:49:18.256798 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:18.864204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3890449930.mount: Deactivated successfully. Jul 2 07:49:19.257289 kubelet[1413]: E0702 07:49:19.257146 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:20.234054 kubelet[1413]: E0702 07:49:20.233970 1413 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:20.258230 kubelet[1413]: E0702 07:49:20.258201 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:21.258783 kubelet[1413]: E0702 07:49:21.258704 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:21.433327 env[1193]: time="2024-07-02T07:49:21.433270112Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:21.435168 env[1193]: time="2024-07-02T07:49:21.435130623Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:21.436655 env[1193]: time="2024-07-02T07:49:21.436607948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:21.438294 env[1193]: time="2024-07-02T07:49:21.438217045Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:21.439183 env[1193]: time="2024-07-02T07:49:21.439140555Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jul 2 07:49:21.441025 env[1193]: time="2024-07-02T07:49:21.440972180Z" level=info msg="CreateContainer within sandbox \"ad88ad3f0ba7eaa1580252bfca44c92f7fd3e9a1ea80af5d5a4a2b034cf1b653\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 2 07:49:21.457619 env[1193]: time="2024-07-02T07:49:21.457563137Z" level=info msg="CreateContainer within sandbox \"ad88ad3f0ba7eaa1580252bfca44c92f7fd3e9a1ea80af5d5a4a2b034cf1b653\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a04203f9bdb564dc6de20d5f47e61cf03a104eb1defcfb28809f857987e70d4d\"" Jul 2 07:49:21.458164 env[1193]: time="2024-07-02T07:49:21.458134963Z" level=info msg="StartContainer for \"a04203f9bdb564dc6de20d5f47e61cf03a104eb1defcfb28809f857987e70d4d\"" Jul 2 07:49:21.476662 systemd[1]: Started cri-containerd-a04203f9bdb564dc6de20d5f47e61cf03a104eb1defcfb28809f857987e70d4d.scope. 
Jul 2 07:49:21.499970 env[1193]: time="2024-07-02T07:49:21.499920053Z" level=info msg="StartContainer for \"a04203f9bdb564dc6de20d5f47e61cf03a104eb1defcfb28809f857987e70d4d\" returns successfully" Jul 2 07:49:22.114150 kubelet[1413]: I0702 07:49:22.114084 1413 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.881780146 podCreationTimestamp="2024-07-02 07:49:15 +0000 UTC" firstStartedPulling="2024-07-02 07:49:16.207144644 +0000 UTC m=+36.220413139" lastFinishedPulling="2024-07-02 07:49:21.43939655 +0000 UTC m=+41.452665044" observedRunningTime="2024-07-02 07:49:22.113882551 +0000 UTC m=+42.127151056" watchObservedRunningTime="2024-07-02 07:49:22.114032051 +0000 UTC m=+42.127300555" Jul 2 07:49:22.259667 kubelet[1413]: E0702 07:49:22.259628 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:23.260413 kubelet[1413]: E0702 07:49:23.260365 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:24.260497 kubelet[1413]: E0702 07:49:24.260461 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:25.261105 kubelet[1413]: E0702 07:49:25.261032 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:26.261933 kubelet[1413]: E0702 07:49:26.261843 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:27.262639 kubelet[1413]: E0702 07:49:27.262565 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:28.262980 kubelet[1413]: E0702 07:49:28.262895 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:29.264010 kubelet[1413]: E0702 07:49:29.263950 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:30.264860 kubelet[1413]: E0702 07:49:30.264809 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:31.265717 kubelet[1413]: E0702 07:49:31.265670 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:31.731002 kubelet[1413]: I0702 07:49:31.730952 1413 topology_manager.go:215] "Topology Admit Handler" podUID="28363915-3922-4f73-950f-14f07100be0c" podNamespace="default" podName="test-pod-1" Jul 2 07:49:31.735400 systemd[1]: Created slice kubepods-besteffort-pod28363915_3922_4f73_950f_14f07100be0c.slice. 
Jul 2 07:49:31.854292 kubelet[1413]: I0702 07:49:31.854251 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d47c8271-d43d-4bb7-a464-c182f7ae069e\" (UniqueName: \"kubernetes.io/nfs/28363915-3922-4f73-950f-14f07100be0c-pvc-d47c8271-d43d-4bb7-a464-c182f7ae069e\") pod \"test-pod-1\" (UID: \"28363915-3922-4f73-950f-14f07100be0c\") " pod="default/test-pod-1" Jul 2 07:49:31.854292 kubelet[1413]: I0702 07:49:31.854302 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z2pt\" (UniqueName: \"kubernetes.io/projected/28363915-3922-4f73-950f-14f07100be0c-kube-api-access-5z2pt\") pod \"test-pod-1\" (UID: \"28363915-3922-4f73-950f-14f07100be0c\") " pod="default/test-pod-1" Jul 2 07:49:31.974032 kernel: FS-Cache: Loaded Jul 2 07:49:32.026541 kernel: RPC: Registered named UNIX socket transport module. Jul 2 07:49:32.026602 kernel: RPC: Registered udp transport module. Jul 2 07:49:32.026628 kernel: RPC: Registered tcp transport module. Jul 2 07:49:32.027593 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 2 07:49:32.089027 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 2 07:49:32.261626 kernel: NFS: Registering the id_resolver key type Jul 2 07:49:32.261784 kernel: Key type id_resolver registered Jul 2 07:49:32.261807 kernel: Key type id_legacy registered Jul 2 07:49:32.265866 kubelet[1413]: E0702 07:49:32.265823 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:32.286345 nfsidmap[2732]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 2 07:49:32.289551 nfsidmap[2735]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 2 07:49:32.337801 env[1193]: time="2024-07-02T07:49:32.337750461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:28363915-3922-4f73-950f-14f07100be0c,Namespace:default,Attempt:0,}" Jul 2 07:49:32.365377 systemd-networkd[1031]: lxc531c1f10c575: Link UP Jul 2 07:49:32.372105 kernel: eth0: renamed from tmpa6e74 Jul 2 07:49:32.380791 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:49:32.380883 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc531c1f10c575: link becomes ready Jul 2 07:49:32.380973 systemd-networkd[1031]: lxc531c1f10c575: Gained carrier Jul 2 07:49:32.566664 env[1193]: time="2024-07-02T07:49:32.566511682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:49:32.566664 env[1193]: time="2024-07-02T07:49:32.566545232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:49:32.566664 env[1193]: time="2024-07-02T07:49:32.566554934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:49:32.566835 env[1193]: time="2024-07-02T07:49:32.566691064Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6e74fab263e151cd6a0304bd2a1ca5da20f68052d6f54f82a932fe31f993f54 pid=2769 runtime=io.containerd.runc.v2 Jul 2 07:49:32.576695 systemd[1]: Started cri-containerd-a6e74fab263e151cd6a0304bd2a1ca5da20f68052d6f54f82a932fe31f993f54.scope. 
Jul 2 07:49:32.587423 systemd-resolved[1144]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:49:32.607171 env[1193]: time="2024-07-02T07:49:32.607128443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:28363915-3922-4f73-950f-14f07100be0c,Namespace:default,Attempt:0,} returns sandbox id \"a6e74fab263e151cd6a0304bd2a1ca5da20f68052d6f54f82a932fe31f993f54\"" Jul 2 07:49:32.608850 env[1193]: time="2024-07-02T07:49:32.608822879Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 07:49:32.975790 env[1193]: time="2024-07-02T07:49:32.975706309Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:32.978452 env[1193]: time="2024-07-02T07:49:32.978363919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:32.980380 env[1193]: time="2024-07-02T07:49:32.980351810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:32.982049 env[1193]: time="2024-07-02T07:49:32.982014808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:32.982728 env[1193]: time="2024-07-02T07:49:32.982690870Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 07:49:32.984455 env[1193]: time="2024-07-02T07:49:32.984426103Z" level=info msg="CreateContainer within sandbox \"a6e74fab263e151cd6a0304bd2a1ca5da20f68052d6f54f82a932fe31f993f54\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 2 07:49:33.002191 env[1193]: time="2024-07-02T07:49:33.002103225Z" level=info msg="CreateContainer within sandbox \"a6e74fab263e151cd6a0304bd2a1ca5da20f68052d6f54f82a932fe31f993f54\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"750b2e1f7f5cb174f1d5dce306e0021a153c886b9081dabf233eba81fc01bd4c\"" Jul 2 07:49:33.002798 env[1193]: time="2024-07-02T07:49:33.002752413Z" level=info msg="StartContainer for \"750b2e1f7f5cb174f1d5dce306e0021a153c886b9081dabf233eba81fc01bd4c\"" Jul 2 07:49:33.021000 systemd[1]: Started cri-containerd-750b2e1f7f5cb174f1d5dce306e0021a153c886b9081dabf233eba81fc01bd4c.scope. Jul 2 07:49:33.050131 env[1193]: time="2024-07-02T07:49:33.050069036Z" level=info msg="StartContainer for \"750b2e1f7f5cb174f1d5dce306e0021a153c886b9081dabf233eba81fc01bd4c\" returns successfully" Jul 2 07:49:33.267115 kubelet[1413]: E0702 07:49:33.266927 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:33.873150 systemd-networkd[1031]: lxc531c1f10c575: Gained IPv6LL Jul 2 07:49:33.964147 systemd[1]: run-containerd-runc-k8s.io-750b2e1f7f5cb174f1d5dce306e0021a153c886b9081dabf233eba81fc01bd4c-runc.HCIGM2.mount: Deactivated successfully. 
Jul 2 07:49:34.267293 kubelet[1413]: E0702 07:49:34.267179 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:35.267889 kubelet[1413]: E0702 07:49:35.267847 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:36.268732 kubelet[1413]: E0702 07:49:36.268670 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:37.270043 kubelet[1413]: E0702 07:49:37.269852 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:38.270981 kubelet[1413]: E0702 07:49:38.270933 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:38.475380 kubelet[1413]: I0702 07:49:38.475332 1413 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=23.100713395 podCreationTimestamp="2024-07-02 07:49:15 +0000 UTC" firstStartedPulling="2024-07-02 07:49:32.608448029 +0000 UTC m=+52.621716523" lastFinishedPulling="2024-07-02 07:49:32.983022518 +0000 UTC m=+52.996291002" observedRunningTime="2024-07-02 07:49:33.132553103 +0000 UTC m=+53.145821587" watchObservedRunningTime="2024-07-02 07:49:38.475287874 +0000 UTC m=+58.488556368" Jul 2 07:49:38.493187 systemd[1]: run-containerd-runc-k8s.io-83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312-runc.SNa1NG.mount: Deactivated successfully. Jul 2 07:49:38.509930 env[1193]: time="2024-07-02T07:49:38.509859975Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:49:38.515560 env[1193]: time="2024-07-02T07:49:38.515517239Z" level=info msg="StopContainer for \"83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312\" with timeout 2 (s)" Jul 2 07:49:38.515790 env[1193]: time="2024-07-02T07:49:38.515759253Z" level=info msg="Stop container \"83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312\" with signal terminated" Jul 2 07:49:38.521714 systemd-networkd[1031]: lxc_health: Link DOWN Jul 2 07:49:38.521725 systemd-networkd[1031]: lxc_health: Lost carrier Jul 2 07:49:38.550376 systemd[1]: cri-containerd-83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312.scope: Deactivated successfully. Jul 2 07:49:38.550693 systemd[1]: cri-containerd-83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312.scope: Consumed 6.294s CPU time. Jul 2 07:49:38.569803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312-rootfs.mount: Deactivated successfully. 
Jul 2 07:49:38.579198 env[1193]: time="2024-07-02T07:49:38.579152837Z" level=info msg="shim disconnected" id=83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312 Jul 2 07:49:38.579198 env[1193]: time="2024-07-02T07:49:38.579198501Z" level=warning msg="cleaning up after shim disconnected" id=83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312 namespace=k8s.io Jul 2 07:49:38.579433 env[1193]: time="2024-07-02T07:49:38.579207831Z" level=info msg="cleaning up dead shim" Jul 2 07:49:38.586163 env[1193]: time="2024-07-02T07:49:38.586121927Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2901 runtime=io.containerd.runc.v2\n" Jul 2 07:49:38.589316 env[1193]: time="2024-07-02T07:49:38.589284716Z" level=info msg="StopContainer for \"83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312\" returns successfully" Jul 2 07:49:38.589973 env[1193]: time="2024-07-02T07:49:38.589947639Z" level=info msg="StopPodSandbox for \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\"" Jul 2 07:49:38.590070 env[1193]: time="2024-07-02T07:49:38.590046966Z" level=info msg="Container to stop \"18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:49:38.590070 env[1193]: time="2024-07-02T07:49:38.590064272Z" level=info msg="Container to stop \"07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:49:38.590121 env[1193]: time="2024-07-02T07:49:38.590074023Z" level=info msg="Container to stop \"743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:49:38.590121 env[1193]: time="2024-07-02T07:49:38.590084745Z" level=info msg="Container to stop \"89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:49:38.590121 env[1193]: time="2024-07-02T07:49:38.590093193Z" level=info msg="Container to stop \"83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:49:38.592108 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7-shm.mount: Deactivated successfully. Jul 2 07:49:38.594655 systemd[1]: cri-containerd-a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7.scope: Deactivated successfully. 
Jul 2 07:49:38.614022 env[1193]: time="2024-07-02T07:49:38.613952832Z" level=info msg="shim disconnected" id=a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7 Jul 2 07:49:38.614223 env[1193]: time="2024-07-02T07:49:38.614025334Z" level=warning msg="cleaning up after shim disconnected" id=a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7 namespace=k8s.io Jul 2 07:49:38.614223 env[1193]: time="2024-07-02T07:49:38.614041117Z" level=info msg="cleaning up dead shim" Jul 2 07:49:38.620944 env[1193]: time="2024-07-02T07:49:38.620897071Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2932 runtime=io.containerd.runc.v2\n" Jul 2 07:49:38.621274 env[1193]: time="2024-07-02T07:49:38.621236098Z" level=info msg="TearDown network for sandbox \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" successfully" Jul 2 07:49:38.621274 env[1193]: time="2024-07-02T07:49:38.621262694Z" level=info msg="StopPodSandbox for \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" returns successfully" Jul 2 07:49:38.689999 kubelet[1413]: I0702 07:49:38.689932 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-lib-modules\") pod \"fe1067c4-70cd-465a-a499-32c10f41faf7\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " Jul 2 07:49:38.689999 kubelet[1413]: I0702 07:49:38.690002 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-hostproc\") pod \"fe1067c4-70cd-465a-a499-32c10f41faf7\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " Jul 2 07:49:38.690220 kubelet[1413]: I0702 07:49:38.690021 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-cilium-cgroup\") pod \"fe1067c4-70cd-465a-a499-32c10f41faf7\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " Jul 2 07:49:38.690220 kubelet[1413]: I0702 07:49:38.690043 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fe1067c4-70cd-465a-a499-32c10f41faf7-clustermesh-secrets\") pod \"fe1067c4-70cd-465a-a499-32c10f41faf7\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " Jul 2 07:49:38.690220 kubelet[1413]: I0702 07:49:38.690059 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-bpf-maps\") pod \"fe1067c4-70cd-465a-a499-32c10f41faf7\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " Jul 2 07:49:38.690220 kubelet[1413]: I0702 07:49:38.690072 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-cni-path\") pod \"fe1067c4-70cd-465a-a499-32c10f41faf7\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " Jul 2 07:49:38.690220 kubelet[1413]: I0702 07:49:38.690086 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-xtables-lock\") pod \"fe1067c4-70cd-465a-a499-32c10f41faf7\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " Jul 2 
07:49:38.690220 kubelet[1413]: I0702 07:49:38.690086 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fe1067c4-70cd-465a-a499-32c10f41faf7" (UID: "fe1067c4-70cd-465a-a499-32c10f41faf7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.690378 kubelet[1413]: I0702 07:49:38.690120 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fe1067c4-70cd-465a-a499-32c10f41faf7" (UID: "fe1067c4-70cd-465a-a499-32c10f41faf7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.690378 kubelet[1413]: I0702 07:49:38.690147 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fe1067c4-70cd-465a-a499-32c10f41faf7" (UID: "fe1067c4-70cd-465a-a499-32c10f41faf7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.690378 kubelet[1413]: I0702 07:49:38.690122 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fe1067c4-70cd-465a-a499-32c10f41faf7" (UID: "fe1067c4-70cd-465a-a499-32c10f41faf7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.690378 kubelet[1413]: I0702 07:49:38.690086 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fe1067c4-70cd-465a-a499-32c10f41faf7" (UID: "fe1067c4-70cd-465a-a499-32c10f41faf7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.690378 kubelet[1413]: I0702 07:49:38.690098 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-etc-cni-netd\") pod \"fe1067c4-70cd-465a-a499-32c10f41faf7\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " Jul 2 07:49:38.690568 kubelet[1413]: I0702 07:49:38.690192 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-host-proc-sys-kernel\") pod \"fe1067c4-70cd-465a-a499-32c10f41faf7\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " Jul 2 07:49:38.690568 kubelet[1413]: I0702 07:49:38.690220 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-cilium-run\") pod \"fe1067c4-70cd-465a-a499-32c10f41faf7\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " Jul 2 07:49:38.690568 kubelet[1413]: I0702 07:49:38.690135 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-cni-path" (OuterVolumeSpecName: "cni-path") pod "fe1067c4-70cd-465a-a499-32c10f41faf7" (UID: "fe1067c4-70cd-465a-a499-32c10f41faf7"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.690568 kubelet[1413]: I0702 07:49:38.690241 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-host-proc-sys-net\") pod \"fe1067c4-70cd-465a-a499-32c10f41faf7\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " Jul 2 07:49:38.690568 kubelet[1413]: I0702 07:49:38.690237 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fe1067c4-70cd-465a-a499-32c10f41faf7" (UID: "fe1067c4-70cd-465a-a499-32c10f41faf7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.690766 kubelet[1413]: I0702 07:49:38.690254 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fe1067c4-70cd-465a-a499-32c10f41faf7" (UID: "fe1067c4-70cd-465a-a499-32c10f41faf7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.690766 kubelet[1413]: I0702 07:49:38.690267 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q8hj\" (UniqueName: \"kubernetes.io/projected/fe1067c4-70cd-465a-a499-32c10f41faf7-kube-api-access-9q8hj\") pod \"fe1067c4-70cd-465a-a499-32c10f41faf7\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " Jul 2 07:49:38.690766 kubelet[1413]: I0702 07:49:38.690269 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fe1067c4-70cd-465a-a499-32c10f41faf7" (UID: "fe1067c4-70cd-465a-a499-32c10f41faf7"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.690766 kubelet[1413]: I0702 07:49:38.690291 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fe1067c4-70cd-465a-a499-32c10f41faf7-hubble-tls\") pod \"fe1067c4-70cd-465a-a499-32c10f41faf7\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " Jul 2 07:49:38.690766 kubelet[1413]: I0702 07:49:38.690333 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe1067c4-70cd-465a-a499-32c10f41faf7-cilium-config-path\") pod \"fe1067c4-70cd-465a-a499-32c10f41faf7\" (UID: \"fe1067c4-70cd-465a-a499-32c10f41faf7\") " Jul 2 07:49:38.690766 kubelet[1413]: I0702 07:49:38.690369 1413 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-xtables-lock\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:38.690972 kubelet[1413]: I0702 07:49:38.690382 1413 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-etc-cni-netd\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:38.690972 kubelet[1413]: I0702 07:49:38.690395 1413 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-host-proc-sys-kernel\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:38.690972 kubelet[1413]: I0702 07:49:38.690408 1413 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-cilium-run\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:38.690972 kubelet[1413]: I0702 07:49:38.690420 1413 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-host-proc-sys-net\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:38.690972 kubelet[1413]: I0702 07:49:38.690431 1413 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-cilium-cgroup\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:38.690972 kubelet[1413]: I0702 07:49:38.690442 1413 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-bpf-maps\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:38.690972 kubelet[1413]: I0702 07:49:38.690452 1413 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-cni-path\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:38.690972 kubelet[1413]: I0702 07:49:38.690465 1413 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-lib-modules\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:38.691188 kubelet[1413]: I0702 07:49:38.690652 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-hostproc" (OuterVolumeSpecName: "hostproc") pod "fe1067c4-70cd-465a-a499-32c10f41faf7" (UID: "fe1067c4-70cd-465a-a499-32c10f41faf7"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:38.692979 kubelet[1413]: I0702 07:49:38.692894 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe1067c4-70cd-465a-a499-32c10f41faf7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fe1067c4-70cd-465a-a499-32c10f41faf7" (UID: "fe1067c4-70cd-465a-a499-32c10f41faf7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:49:38.692979 kubelet[1413]: I0702 07:49:38.692921 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe1067c4-70cd-465a-a499-32c10f41faf7-kube-api-access-9q8hj" (OuterVolumeSpecName: "kube-api-access-9q8hj") pod "fe1067c4-70cd-465a-a499-32c10f41faf7" (UID: "fe1067c4-70cd-465a-a499-32c10f41faf7"). InnerVolumeSpecName "kube-api-access-9q8hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:49:38.693373 kubelet[1413]: I0702 07:49:38.693335 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1067c4-70cd-465a-a499-32c10f41faf7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fe1067c4-70cd-465a-a499-32c10f41faf7" (UID: "fe1067c4-70cd-465a-a499-32c10f41faf7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:49:38.694279 kubelet[1413]: I0702 07:49:38.694261 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe1067c4-70cd-465a-a499-32c10f41faf7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fe1067c4-70cd-465a-a499-32c10f41faf7" (UID: "fe1067c4-70cd-465a-a499-32c10f41faf7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:49:38.791229 kubelet[1413]: I0702 07:49:38.791070 1413 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe1067c4-70cd-465a-a499-32c10f41faf7-cilium-config-path\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:38.791229 kubelet[1413]: I0702 07:49:38.791120 1413 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fe1067c4-70cd-465a-a499-32c10f41faf7-clustermesh-secrets\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:38.791229 kubelet[1413]: I0702 07:49:38.791148 1413 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fe1067c4-70cd-465a-a499-32c10f41faf7-hostproc\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:38.791229 kubelet[1413]: I0702 07:49:38.791166 1413 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9q8hj\" (UniqueName: \"kubernetes.io/projected/fe1067c4-70cd-465a-a499-32c10f41faf7-kube-api-access-9q8hj\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:38.791229 kubelet[1413]: I0702 07:49:38.791178 1413 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fe1067c4-70cd-465a-a499-32c10f41faf7-hubble-tls\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:39.024608 systemd[1]: Removed slice kubepods-burstable-podfe1067c4_70cd_465a_a499_32c10f41faf7.slice. Jul 2 07:49:39.024687 systemd[1]: kubepods-burstable-podfe1067c4_70cd_465a_a499_32c10f41faf7.slice: Consumed 6.509s CPU time. 
Jul 2 07:49:39.138308 kubelet[1413]: I0702 07:49:39.138285 1413 scope.go:117] "RemoveContainer" containerID="83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312" Jul 2 07:49:39.164970 env[1193]: time="2024-07-02T07:49:39.164894169Z" level=info msg="RemoveContainer for \"83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312\"" Jul 2 07:49:39.271805 kubelet[1413]: E0702 07:49:39.271742 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:39.440017 env[1193]: time="2024-07-02T07:49:39.439808275Z" level=info msg="RemoveContainer for \"83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312\" returns successfully" Jul 2 07:49:39.440261 kubelet[1413]: I0702 07:49:39.440201 1413 scope.go:117] "RemoveContainer" containerID="18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590" Jul 2 07:49:39.441602 env[1193]: time="2024-07-02T07:49:39.441565000Z" level=info msg="RemoveContainer for \"18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590\"" Jul 2 07:49:39.487716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7-rootfs.mount: Deactivated successfully. Jul 2 07:49:39.487845 systemd[1]: var-lib-kubelet-pods-fe1067c4\x2d70cd\x2d465a\x2da499\x2d32c10f41faf7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9q8hj.mount: Deactivated successfully. Jul 2 07:49:39.487909 systemd[1]: var-lib-kubelet-pods-fe1067c4\x2d70cd\x2d465a\x2da499\x2d32c10f41faf7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:49:39.487960 systemd[1]: var-lib-kubelet-pods-fe1067c4\x2d70cd\x2d465a\x2da499\x2d32c10f41faf7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 07:49:39.596157 env[1193]: time="2024-07-02T07:49:39.596087954Z" level=info msg="RemoveContainer for \"18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590\" returns successfully" Jul 2 07:49:39.596750 kubelet[1413]: I0702 07:49:39.596708 1413 scope.go:117] "RemoveContainer" containerID="89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8" Jul 2 07:49:39.598286 env[1193]: time="2024-07-02T07:49:39.598240602Z" level=info msg="RemoveContainer for \"89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8\"" Jul 2 07:49:39.766216 env[1193]: time="2024-07-02T07:49:39.766049253Z" level=info msg="RemoveContainer for \"89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8\" returns successfully" Jul 2 07:49:39.766425 kubelet[1413]: I0702 07:49:39.766386 1413 scope.go:117] "RemoveContainer" containerID="743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3" Jul 2 07:49:39.768063 env[1193]: time="2024-07-02T07:49:39.768022618Z" level=info msg="RemoveContainer for \"743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3\"" Jul 2 07:49:39.806312 env[1193]: time="2024-07-02T07:49:39.806241860Z" level=info msg="RemoveContainer for \"743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3\" returns successfully" Jul 2 07:49:39.806663 kubelet[1413]: I0702 07:49:39.806615 1413 scope.go:117] "RemoveContainer" containerID="07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36" Jul 2 07:49:39.808123 env[1193]: time="2024-07-02T07:49:39.808093321Z" level=info msg="RemoveContainer for \"07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36\"" Jul 2 07:49:39.921007 env[1193]: time="2024-07-02T07:49:39.920921763Z" level=info msg="RemoveContainer for \"07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36\" returns successfully" Jul 2 07:49:39.921282 kubelet[1413]: I0702 07:49:39.921248 1413 scope.go:117] "RemoveContainer" containerID="83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312" Jul 2 07:49:39.921715 env[1193]: time="2024-07-02T07:49:39.921622912Z" level=error msg="ContainerStatus for \"83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312\": not found" Jul 2 07:49:39.922443 kubelet[1413]: E0702 07:49:39.922415 1413 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312\": not found" containerID="83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312" Jul 2 07:49:39.922552 kubelet[1413]: I0702 07:49:39.922529 1413 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312"} err="failed to get container status \"83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312\": rpc error: code = NotFound desc = an error occurred when try to find container \"83536c964eb252be3eda67485b726284d67379e523614d3a685cd4b5d1930312\": not found" Jul 2 07:49:39.922552 kubelet[1413]: I0702 07:49:39.922550 1413 scope.go:117] "RemoveContainer" containerID="18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590" Jul 2 07:49:39.922809 env[1193]: time="2024-07-02T07:49:39.922763655Z" level=error msg="ContainerStatus for 
\"18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590\": not found" Jul 2 07:49:39.923005 kubelet[1413]: E0702 07:49:39.922967 1413 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590\": not found" containerID="18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590" Jul 2 07:49:39.923062 kubelet[1413]: I0702 07:49:39.923023 1413 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590"} err="failed to get container status \"18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590\": rpc error: code = NotFound desc = an error occurred when try to find container \"18e0a569ba9bf1d3850259f619b1a9029a55e773cd1f0f1ef6064d251e6e6590\": not found" Jul 2 07:49:39.923062 kubelet[1413]: I0702 07:49:39.923035 1413 scope.go:117] "RemoveContainer" containerID="89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8" Jul 2 07:49:39.923285 env[1193]: time="2024-07-02T07:49:39.923237811Z" level=error msg="ContainerStatus for \"89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8\": not found" Jul 2 07:49:39.923389 kubelet[1413]: E0702 07:49:39.923371 1413 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8\": not found" containerID="89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8" Jul 2 07:49:39.923468 kubelet[1413]: I0702 07:49:39.923408 1413 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8"} err="failed to get container status \"89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"89209068735621cba493af3246c6307c567aeb1c7eb549fdfe4b21d309e9d5f8\": not found" Jul 2 07:49:39.923468 kubelet[1413]: I0702 07:49:39.923418 1413 scope.go:117] "RemoveContainer" containerID="743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3" Jul 2 07:49:39.923607 env[1193]: time="2024-07-02T07:49:39.923563979Z" level=error msg="ContainerStatus for \"743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3\": not found" Jul 2 07:49:39.923793 kubelet[1413]: E0702 07:49:39.923765 1413 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3\": not found" containerID="743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3" Jul 2 07:49:39.923840 kubelet[1413]: I0702 07:49:39.923814 1413 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3"} err="failed to get container status \"743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3\": rpc error: code = NotFound desc = an error occurred when try to find container \"743622e36a329fbc378cfac5ffae8534891f308aaa2427f2a4c23d385d37bfa3\": not found" Jul 2 07:49:39.923840 kubelet[1413]: I0702 07:49:39.923830 1413 scope.go:117] "RemoveContainer" containerID="07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36" Jul 2 07:49:39.924120 env[1193]: time="2024-07-02T07:49:39.924058919Z" level=error msg="ContainerStatus for \"07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36\": not found" Jul 2 07:49:39.924234 kubelet[1413]: E0702 07:49:39.924222 1413 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36\": not found" containerID="07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36" Jul 2 07:49:39.924262 kubelet[1413]: I0702 07:49:39.924256 1413 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36"} err="failed to get container status \"07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36\": rpc error: code = NotFound desc = an error occurred when try to find container \"07574fc786a5f43f07284b0b6329bf41ea024394e61d23180201ee09eed88e36\": not found" Jul 2 07:49:40.234291 kubelet[1413]: E0702 07:49:40.234244 1413 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:40.256795 env[1193]: time="2024-07-02T07:49:40.256757788Z" level=info msg="StopPodSandbox for \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\"" Jul 2 07:49:40.256908 env[1193]: time="2024-07-02T07:49:40.256868598Z" level=info msg="TearDown network for sandbox \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" successfully" Jul 2 07:49:40.256947 env[1193]: time="2024-07-02T07:49:40.256906847Z" level=info msg="StopPodSandbox for \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" returns successfully" Jul 2 07:49:40.257245 env[1193]: time="2024-07-02T07:49:40.257219185Z" level=info msg="RemovePodSandbox for \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\"" Jul 2 07:49:40.257319 env[1193]: time="2024-07-02T07:49:40.257244478Z" level=info msg="Forcibly stopping sandbox \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\"" Jul 2 07:49:40.257319 env[1193]: time="2024-07-02T07:49:40.257291916Z" level=info msg="TearDown network for sandbox \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" successfully" Jul 2 07:49:40.272560 kubelet[1413]: E0702 07:49:40.272536 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:40.334228 env[1193]: time="2024-07-02T07:49:40.334161690Z" level=info msg="RemovePodSandbox \"a894471ef7102524337232e2d2586842f5332f854da67e8772eb24a6553a6cb7\" returns successfully" Jul 2 07:49:40.419219 kubelet[1413]: I0702 07:49:40.419155 1413 topology_manager.go:215] "Topology 
Admit Handler" podUID="6999568a-a54a-4ac5-9c5c-56ee6341446a" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-9xdg8" Jul 2 07:49:40.419219 kubelet[1413]: E0702 07:49:40.419224 1413 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe1067c4-70cd-465a-a499-32c10f41faf7" containerName="apply-sysctl-overwrites" Jul 2 07:49:40.419219 kubelet[1413]: E0702 07:49:40.419237 1413 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe1067c4-70cd-465a-a499-32c10f41faf7" containerName="mount-bpf-fs" Jul 2 07:49:40.419219 kubelet[1413]: E0702 07:49:40.419244 1413 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe1067c4-70cd-465a-a499-32c10f41faf7" containerName="mount-cgroup" Jul 2 07:49:40.419486 kubelet[1413]: E0702 07:49:40.419251 1413 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe1067c4-70cd-465a-a499-32c10f41faf7" containerName="clean-cilium-state" Jul 2 07:49:40.419486 kubelet[1413]: E0702 07:49:40.419270 1413 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe1067c4-70cd-465a-a499-32c10f41faf7" containerName="cilium-agent" Jul 2 07:49:40.419486 kubelet[1413]: I0702 07:49:40.419289 1413 memory_manager.go:346] "RemoveStaleState removing state" podUID="fe1067c4-70cd-465a-a499-32c10f41faf7" containerName="cilium-agent" Jul 2 07:49:40.424191 systemd[1]: Created slice kubepods-besteffort-pod6999568a_a54a_4ac5_9c5c_56ee6341446a.slice. Jul 2 07:49:40.444363 kubelet[1413]: I0702 07:49:40.444308 1413 topology_manager.go:215] "Topology Admit Handler" podUID="4eb2eeca-d245-4f3e-860e-0052a2310f61" podNamespace="kube-system" podName="cilium-fn427" Jul 2 07:49:40.449131 systemd[1]: Created slice kubepods-burstable-pod4eb2eeca_d245_4f3e_860e_0052a2310f61.slice. Jul 2 07:49:40.504905 kubelet[1413]: I0702 07:49:40.503940 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6999568a-a54a-4ac5-9c5c-56ee6341446a-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-9xdg8\" (UID: \"6999568a-a54a-4ac5-9c5c-56ee6341446a\") " pod="kube-system/cilium-operator-6bc8ccdb58-9xdg8" Jul 2 07:49:40.504905 kubelet[1413]: I0702 07:49:40.504013 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-cni-path\") pod \"cilium-fn427\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " pod="kube-system/cilium-fn427" Jul 2 07:49:40.504905 kubelet[1413]: I0702 07:49:40.504032 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-etc-cni-netd\") pod \"cilium-fn427\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " pod="kube-system/cilium-fn427" Jul 2 07:49:40.504905 kubelet[1413]: I0702 07:49:40.504062 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-xtables-lock\") pod \"cilium-fn427\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " pod="kube-system/cilium-fn427" Jul 2 07:49:40.504905 kubelet[1413]: I0702 07:49:40.504111 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-host-proc-sys-kernel\") pod \"cilium-fn427\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " pod="kube-system/cilium-fn427" Jul 2 07:49:40.505298 kubelet[1413]: I0702 07:49:40.504152 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-lib-modules\") pod \"cilium-fn427\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " pod="kube-system/cilium-fn427" Jul 2 07:49:40.505298 kubelet[1413]: I0702 07:49:40.504177 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-config-path\") pod \"cilium-fn427\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " pod="kube-system/cilium-fn427" Jul 2 07:49:40.505298 kubelet[1413]: I0702 07:49:40.504203 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhr9n\" (UniqueName: \"kubernetes.io/projected/6999568a-a54a-4ac5-9c5c-56ee6341446a-kube-api-access-rhr9n\") pod \"cilium-operator-6bc8ccdb58-9xdg8\" (UID: \"6999568a-a54a-4ac5-9c5c-56ee6341446a\") " pod="kube-system/cilium-operator-6bc8ccdb58-9xdg8" Jul 2 07:49:40.505298 kubelet[1413]: I0702 07:49:40.504230 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-hostproc\") pod \"cilium-fn427\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " pod="kube-system/cilium-fn427" Jul 2 07:49:40.505298 kubelet[1413]: I0702 07:49:40.504253 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-host-proc-sys-net\") pod \"cilium-fn427\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " pod="kube-system/cilium-fn427" Jul 2 07:49:40.505529 kubelet[1413]: I0702 07:49:40.504297 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-run\") pod \"cilium-fn427\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " pod="kube-system/cilium-fn427" Jul 2 07:49:40.505529 kubelet[1413]: I0702 07:49:40.504319 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-cgroup\") pod \"cilium-fn427\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " pod="kube-system/cilium-fn427" Jul 2 07:49:40.505529 kubelet[1413]: I0702 07:49:40.504342 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-ipsec-secrets\") pod \"cilium-fn427\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " pod="kube-system/cilium-fn427" Jul 2 07:49:40.505529 kubelet[1413]: I0702 07:49:40.504360 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4eb2eeca-d245-4f3e-860e-0052a2310f61-hubble-tls\") pod \"cilium-fn427\" (UID: 
\"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " pod="kube-system/cilium-fn427" Jul 2 07:49:40.505529 kubelet[1413]: I0702 07:49:40.504381 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-bpf-maps\") pod \"cilium-fn427\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " pod="kube-system/cilium-fn427" Jul 2 07:49:40.505529 kubelet[1413]: I0702 07:49:40.504397 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4eb2eeca-d245-4f3e-860e-0052a2310f61-clustermesh-secrets\") pod \"cilium-fn427\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " pod="kube-system/cilium-fn427" Jul 2 07:49:40.505848 kubelet[1413]: I0702 07:49:40.504415 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sx4t\" (UniqueName: \"kubernetes.io/projected/4eb2eeca-d245-4f3e-860e-0052a2310f61-kube-api-access-5sx4t\") pod \"cilium-fn427\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " pod="kube-system/cilium-fn427" Jul 2 07:49:40.726923 kubelet[1413]: E0702 07:49:40.726865 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:40.727546 env[1193]: time="2024-07-02T07:49:40.727492822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-9xdg8,Uid:6999568a-a54a-4ac5-9c5c-56ee6341446a,Namespace:kube-system,Attempt:0,}" Jul 2 07:49:40.761039 kubelet[1413]: E0702 07:49:40.760882 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:40.761567 env[1193]: time="2024-07-02T07:49:40.761465814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fn427,Uid:4eb2eeca-d245-4f3e-860e-0052a2310f61,Namespace:kube-system,Attempt:0,}" Jul 2 07:49:41.023032 kubelet[1413]: I0702 07:49:41.022893 1413 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fe1067c4-70cd-465a-a499-32c10f41faf7" path="/var/lib/kubelet/pods/fe1067c4-70cd-465a-a499-32c10f41faf7/volumes" Jul 2 07:49:41.047961 env[1193]: time="2024-07-02T07:49:41.047882857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:49:41.047961 env[1193]: time="2024-07-02T07:49:41.047924894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:49:41.047961 env[1193]: time="2024-07-02T07:49:41.047935726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:49:41.048246 env[1193]: time="2024-07-02T07:49:41.048106038Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de00c16c8d70267764a8bad4b2c0351ee88021940e2b7150f5c5c781b92fa3ee pid=2962 runtime=io.containerd.runc.v2 Jul 2 07:49:41.054877 env[1193]: time="2024-07-02T07:49:41.054790608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:49:41.054877 env[1193]: time="2024-07-02T07:49:41.054843818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:49:41.054877 env[1193]: time="2024-07-02T07:49:41.054856234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:49:41.055149 env[1193]: time="2024-07-02T07:49:41.055071349Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4d1eb3b1708516f52c17540d01fdf6517bb575df3ca98f5dfbcf41eeca49e25 pid=2981 runtime=io.containerd.runc.v2 Jul 2 07:49:41.061195 systemd[1]: Started cri-containerd-de00c16c8d70267764a8bad4b2c0351ee88021940e2b7150f5c5c781b92fa3ee.scope. Jul 2 07:49:41.068561 systemd[1]: Started cri-containerd-f4d1eb3b1708516f52c17540d01fdf6517bb575df3ca98f5dfbcf41eeca49e25.scope. Jul 2 07:49:41.075672 kubelet[1413]: E0702 07:49:41.075643 1413 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:49:41.093911 env[1193]: time="2024-07-02T07:49:41.093867022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fn427,Uid:4eb2eeca-d245-4f3e-860e-0052a2310f61,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4d1eb3b1708516f52c17540d01fdf6517bb575df3ca98f5dfbcf41eeca49e25\"" Jul 2 07:49:41.095486 kubelet[1413]: E0702 07:49:41.095464 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:41.097398 env[1193]: time="2024-07-02T07:49:41.097368496Z" level=info msg="CreateContainer within sandbox \"f4d1eb3b1708516f52c17540d01fdf6517bb575df3ca98f5dfbcf41eeca49e25\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:49:41.099556 env[1193]: time="2024-07-02T07:49:41.099514325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-9xdg8,Uid:6999568a-a54a-4ac5-9c5c-56ee6341446a,Namespace:kube-system,Attempt:0,} returns sandbox id \"de00c16c8d70267764a8bad4b2c0351ee88021940e2b7150f5c5c781b92fa3ee\"" Jul 2 07:49:41.100042 kubelet[1413]: E0702 07:49:41.100024 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:41.100751 env[1193]: time="2024-07-02T07:49:41.100713898Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 07:49:41.115219 env[1193]: time="2024-07-02T07:49:41.115178843Z" level=info msg="CreateContainer within sandbox \"f4d1eb3b1708516f52c17540d01fdf6517bb575df3ca98f5dfbcf41eeca49e25\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668\"" Jul 2 07:49:41.115809 env[1193]: time="2024-07-02T07:49:41.115773705Z" level=info msg="StartContainer for \"a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668\"" Jul 2 07:49:41.131339 systemd[1]: Started cri-containerd-a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668.scope. 
Jul 2 07:49:41.142208 systemd[1]: cri-containerd-a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668.scope: Deactivated successfully. Jul 2 07:49:41.161360 env[1193]: time="2024-07-02T07:49:41.161299643Z" level=info msg="shim disconnected" id=a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668 Jul 2 07:49:41.161360 env[1193]: time="2024-07-02T07:49:41.161352462Z" level=warning msg="cleaning up after shim disconnected" id=a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668 namespace=k8s.io Jul 2 07:49:41.161360 env[1193]: time="2024-07-02T07:49:41.161363825Z" level=info msg="cleaning up dead shim" Jul 2 07:49:41.169893 env[1193]: time="2024-07-02T07:49:41.169833277Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3060 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T07:49:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2024-07-02T07:49:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 07:49:41.170268 env[1193]: time="2024-07-02T07:49:41.170160314Z" level=error msg="copy shim log" error="read /proc/self/fd/65: file already closed" Jul 2 07:49:41.170420 env[1193]: time="2024-07-02T07:49:41.170375619Z" level=error msg="Failed to pipe stdout of container \"a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668\"" error="reading from a closed fifo" Jul 2 07:49:41.170550 env[1193]: time="2024-07-02T07:49:41.170408227Z" level=error msg="Failed to pipe stderr of container \"a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668\"" error="reading from a closed fifo" Jul 2 07:49:41.172857 env[1193]: time="2024-07-02T07:49:41.172784402Z" level=error msg="StartContainer for \"a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 2 07:49:41.173197 kubelet[1413]: E0702 07:49:41.173154 1413 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668" Jul 2 07:49:41.173540 kubelet[1413]: E0702 07:49:41.173319 1413 kuberuntime_manager.go:1261] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 07:49:41.173540 kubelet[1413]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 07:49:41.173540 kubelet[1413]: rm /hostbin/cilium-mount Jul 2 07:49:41.173651 kubelet[1413]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5sx4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-fn427_kube-system(4eb2eeca-d245-4f3e-860e-0052a2310f61): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 07:49:41.173651 kubelet[1413]: E0702 07:49:41.173370 1413 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-fn427" podUID="4eb2eeca-d245-4f3e-860e-0052a2310f61" Jul 2 07:49:41.273283 kubelet[1413]: E0702 07:49:41.273151 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:42.148877 env[1193]: time="2024-07-02T07:49:42.148820065Z" level=info msg="StopPodSandbox for \"f4d1eb3b1708516f52c17540d01fdf6517bb575df3ca98f5dfbcf41eeca49e25\"" Jul 2 07:49:42.149489 env[1193]: time="2024-07-02T07:49:42.148905421Z" level=info msg="Container to stop \"a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:49:42.151107 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4d1eb3b1708516f52c17540d01fdf6517bb575df3ca98f5dfbcf41eeca49e25-shm.mount: Deactivated successfully. Jul 2 07:49:42.155335 systemd[1]: cri-containerd-f4d1eb3b1708516f52c17540d01fdf6517bb575df3ca98f5dfbcf41eeca49e25.scope: Deactivated successfully. Jul 2 07:49:42.175995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4d1eb3b1708516f52c17540d01fdf6517bb575df3ca98f5dfbcf41eeca49e25-rootfs.mount: Deactivated successfully. 
Jul 2 07:49:42.219199 env[1193]: time="2024-07-02T07:49:42.219117824Z" level=info msg="shim disconnected" id=f4d1eb3b1708516f52c17540d01fdf6517bb575df3ca98f5dfbcf41eeca49e25 Jul 2 07:49:42.219199 env[1193]: time="2024-07-02T07:49:42.219176384Z" level=warning msg="cleaning up after shim disconnected" id=f4d1eb3b1708516f52c17540d01fdf6517bb575df3ca98f5dfbcf41eeca49e25 namespace=k8s.io Jul 2 07:49:42.219199 env[1193]: time="2024-07-02T07:49:42.219186175Z" level=info msg="cleaning up dead shim" Jul 2 07:49:42.231596 env[1193]: time="2024-07-02T07:49:42.231543196Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3090 runtime=io.containerd.runc.v2\n" Jul 2 07:49:42.231946 env[1193]: time="2024-07-02T07:49:42.231916156Z" level=info msg="TearDown network for sandbox \"f4d1eb3b1708516f52c17540d01fdf6517bb575df3ca98f5dfbcf41eeca49e25\" successfully" Jul 2 07:49:42.232014 env[1193]: time="2024-07-02T07:49:42.231945006Z" level=info msg="StopPodSandbox for \"f4d1eb3b1708516f52c17540d01fdf6517bb575df3ca98f5dfbcf41eeca49e25\" returns successfully" Jul 2 07:49:42.273939 kubelet[1413]: E0702 07:49:42.273882 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:42.315614 kubelet[1413]: I0702 07:49:42.315554 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-lib-modules\") pod \"4eb2eeca-d245-4f3e-860e-0052a2310f61\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " Jul 2 07:49:42.315614 kubelet[1413]: I0702 07:49:42.315626 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-run\") pod \"4eb2eeca-d245-4f3e-860e-0052a2310f61\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " Jul 2 07:49:42.315933 kubelet[1413]: I0702 07:49:42.315656 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-cgroup\") pod \"4eb2eeca-d245-4f3e-860e-0052a2310f61\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " Jul 2 07:49:42.315933 kubelet[1413]: I0702 07:49:42.315673 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4eb2eeca-d245-4f3e-860e-0052a2310f61" (UID: "4eb2eeca-d245-4f3e-860e-0052a2310f61"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:42.315933 kubelet[1413]: I0702 07:49:42.315700 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-config-path\") pod \"4eb2eeca-d245-4f3e-860e-0052a2310f61\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " Jul 2 07:49:42.315933 kubelet[1413]: I0702 07:49:42.315791 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4eb2eeca-d245-4f3e-860e-0052a2310f61-clustermesh-secrets\") pod \"4eb2eeca-d245-4f3e-860e-0052a2310f61\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " Jul 2 07:49:42.315933 kubelet[1413]: I0702 07:49:42.315821 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4eb2eeca-d245-4f3e-860e-0052a2310f61-hubble-tls\") pod \"4eb2eeca-d245-4f3e-860e-0052a2310f61\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " Jul 2 07:49:42.315933 kubelet[1413]: I0702 07:49:42.315847 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-host-proc-sys-kernel\") pod \"4eb2eeca-d245-4f3e-860e-0052a2310f61\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " Jul 2 07:49:42.315933 kubelet[1413]: I0702 07:49:42.315868 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-host-proc-sys-net\") pod \"4eb2eeca-d245-4f3e-860e-0052a2310f61\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " Jul 2 07:49:42.315933 kubelet[1413]: I0702 07:49:42.315886 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-hostproc\") pod \"4eb2eeca-d245-4f3e-860e-0052a2310f61\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " Jul 2 07:49:42.315933 kubelet[1413]: I0702 07:49:42.315903 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-bpf-maps\") pod \"4eb2eeca-d245-4f3e-860e-0052a2310f61\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " Jul 2 07:49:42.315933 kubelet[1413]: I0702 07:49:42.315932 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-cni-path\") pod \"4eb2eeca-d245-4f3e-860e-0052a2310f61\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " Jul 2 07:49:42.316291 kubelet[1413]: I0702 07:49:42.315957 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-xtables-lock\") pod \"4eb2eeca-d245-4f3e-860e-0052a2310f61\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " Jul 2 07:49:42.316291 kubelet[1413]: I0702 07:49:42.315981 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-ipsec-secrets\") pod \"4eb2eeca-d245-4f3e-860e-0052a2310f61\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " Jul 
2 07:49:42.316291 kubelet[1413]: I0702 07:49:42.316048 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-etc-cni-netd\") pod \"4eb2eeca-d245-4f3e-860e-0052a2310f61\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " Jul 2 07:49:42.316291 kubelet[1413]: I0702 07:49:42.316072 1413 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sx4t\" (UniqueName: \"kubernetes.io/projected/4eb2eeca-d245-4f3e-860e-0052a2310f61-kube-api-access-5sx4t\") pod \"4eb2eeca-d245-4f3e-860e-0052a2310f61\" (UID: \"4eb2eeca-d245-4f3e-860e-0052a2310f61\") " Jul 2 07:49:42.316291 kubelet[1413]: I0702 07:49:42.316121 1413 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-lib-modules\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:42.319009 kubelet[1413]: I0702 07:49:42.316496 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4eb2eeca-d245-4f3e-860e-0052a2310f61" (UID: "4eb2eeca-d245-4f3e-860e-0052a2310f61"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:42.319009 kubelet[1413]: I0702 07:49:42.316544 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4eb2eeca-d245-4f3e-860e-0052a2310f61" (UID: "4eb2eeca-d245-4f3e-860e-0052a2310f61"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:42.319009 kubelet[1413]: I0702 07:49:42.316564 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-hostproc" (OuterVolumeSpecName: "hostproc") pod "4eb2eeca-d245-4f3e-860e-0052a2310f61" (UID: "4eb2eeca-d245-4f3e-860e-0052a2310f61"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:42.319009 kubelet[1413]: I0702 07:49:42.316843 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4eb2eeca-d245-4f3e-860e-0052a2310f61" (UID: "4eb2eeca-d245-4f3e-860e-0052a2310f61"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:42.319009 kubelet[1413]: I0702 07:49:42.316871 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-cni-path" (OuterVolumeSpecName: "cni-path") pod "4eb2eeca-d245-4f3e-860e-0052a2310f61" (UID: "4eb2eeca-d245-4f3e-860e-0052a2310f61"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:42.319009 kubelet[1413]: I0702 07:49:42.316897 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4eb2eeca-d245-4f3e-860e-0052a2310f61" (UID: "4eb2eeca-d245-4f3e-860e-0052a2310f61"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:42.319009 kubelet[1413]: I0702 07:49:42.317436 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4eb2eeca-d245-4f3e-860e-0052a2310f61" (UID: "4eb2eeca-d245-4f3e-860e-0052a2310f61"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:42.319009 kubelet[1413]: I0702 07:49:42.317473 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4eb2eeca-d245-4f3e-860e-0052a2310f61" (UID: "4eb2eeca-d245-4f3e-860e-0052a2310f61"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:42.319009 kubelet[1413]: I0702 07:49:42.317498 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4eb2eeca-d245-4f3e-860e-0052a2310f61" (UID: "4eb2eeca-d245-4f3e-860e-0052a2310f61"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:49:42.319365 kubelet[1413]: I0702 07:49:42.319041 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4eb2eeca-d245-4f3e-860e-0052a2310f61" (UID: "4eb2eeca-d245-4f3e-860e-0052a2310f61"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:49:42.319964 kubelet[1413]: I0702 07:49:42.319902 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb2eeca-d245-4f3e-860e-0052a2310f61-kube-api-access-5sx4t" (OuterVolumeSpecName: "kube-api-access-5sx4t") pod "4eb2eeca-d245-4f3e-860e-0052a2310f61" (UID: "4eb2eeca-d245-4f3e-860e-0052a2310f61"). InnerVolumeSpecName "kube-api-access-5sx4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:49:42.321845 kubelet[1413]: I0702 07:49:42.321800 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb2eeca-d245-4f3e-860e-0052a2310f61-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4eb2eeca-d245-4f3e-860e-0052a2310f61" (UID: "4eb2eeca-d245-4f3e-860e-0052a2310f61"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:49:42.322136 kubelet[1413]: I0702 07:49:42.322080 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb2eeca-d245-4f3e-860e-0052a2310f61-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4eb2eeca-d245-4f3e-860e-0052a2310f61" (UID: "4eb2eeca-d245-4f3e-860e-0052a2310f61"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:49:42.322231 kubelet[1413]: I0702 07:49:42.322174 1413 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4eb2eeca-d245-4f3e-860e-0052a2310f61" (UID: "4eb2eeca-d245-4f3e-860e-0052a2310f61"). 
InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:49:42.322744 systemd[1]: var-lib-kubelet-pods-4eb2eeca\x2dd245\x2d4f3e\x2d860e\x2d0052a2310f61-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5sx4t.mount: Deactivated successfully. Jul 2 07:49:42.322873 systemd[1]: var-lib-kubelet-pods-4eb2eeca\x2dd245\x2d4f3e\x2d860e\x2d0052a2310f61-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:49:42.322963 systemd[1]: var-lib-kubelet-pods-4eb2eeca\x2dd245\x2d4f3e\x2d860e\x2d0052a2310f61-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:49:42.325952 systemd[1]: var-lib-kubelet-pods-4eb2eeca\x2dd245\x2d4f3e\x2d860e\x2d0052a2310f61-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 07:49:42.416712 kubelet[1413]: I0702 07:49:42.416563 1413 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-ipsec-secrets\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:42.416712 kubelet[1413]: I0702 07:49:42.416603 1413 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-cni-path\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:42.416712 kubelet[1413]: I0702 07:49:42.416615 1413 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-xtables-lock\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:42.416712 kubelet[1413]: I0702 07:49:42.416624 1413 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-etc-cni-netd\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:42.416712 kubelet[1413]: I0702 07:49:42.416634 1413 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5sx4t\" (UniqueName: \"kubernetes.io/projected/4eb2eeca-d245-4f3e-860e-0052a2310f61-kube-api-access-5sx4t\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:42.416712 kubelet[1413]: I0702 07:49:42.416642 1413 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-run\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:42.416712 kubelet[1413]: I0702 07:49:42.416652 1413 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-cgroup\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:42.416712 kubelet[1413]: I0702 07:49:42.416663 1413 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4eb2eeca-d245-4f3e-860e-0052a2310f61-cilium-config-path\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:42.416712 kubelet[1413]: I0702 07:49:42.416671 1413 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4eb2eeca-d245-4f3e-860e-0052a2310f61-clustermesh-secrets\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:42.416712 kubelet[1413]: I0702 07:49:42.416679 1413 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4eb2eeca-d245-4f3e-860e-0052a2310f61-hubble-tls\") on node 
\"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:42.416712 kubelet[1413]: I0702 07:49:42.416688 1413 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-host-proc-sys-net\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:42.416712 kubelet[1413]: I0702 07:49:42.416696 1413 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-hostproc\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:42.416712 kubelet[1413]: I0702 07:49:42.416706 1413 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-bpf-maps\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:42.416712 kubelet[1413]: I0702 07:49:42.416716 1413 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4eb2eeca-d245-4f3e-860e-0052a2310f61-host-proc-sys-kernel\") on node \"10.0.0.92\" DevicePath \"\"" Jul 2 07:49:42.970799 kubelet[1413]: I0702 07:49:42.970758 1413 setters.go:552] "Node became not ready" node="10.0.0.92" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T07:49:42Z","lastTransitionTime":"2024-07-02T07:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 07:49:43.025899 systemd[1]: Removed slice kubepods-burstable-pod4eb2eeca_d245_4f3e_860e_0052a2310f61.slice. Jul 2 07:49:43.151257 kubelet[1413]: I0702 07:49:43.151225 1413 scope.go:117] "RemoveContainer" containerID="a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668" Jul 2 07:49:43.152202 env[1193]: time="2024-07-02T07:49:43.152162245Z" level=info msg="RemoveContainer for \"a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668\"" Jul 2 07:49:43.274701 kubelet[1413]: E0702 07:49:43.274611 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:44.108704 kubelet[1413]: I0702 07:49:44.108645 1413 topology_manager.go:215] "Topology Admit Handler" podUID="ac9a2839-6aae-4dfb-8252-334945724d58" podNamespace="kube-system" podName="cilium-5glbg" Jul 2 07:49:44.108704 kubelet[1413]: E0702 07:49:44.108699 1413 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4eb2eeca-d245-4f3e-860e-0052a2310f61" containerName="mount-cgroup" Jul 2 07:49:44.108704 kubelet[1413]: I0702 07:49:44.108720 1413 memory_manager.go:346] "RemoveStaleState removing state" podUID="4eb2eeca-d245-4f3e-860e-0052a2310f61" containerName="mount-cgroup" Jul 2 07:49:44.114305 systemd[1]: Created slice kubepods-burstable-podac9a2839_6aae_4dfb_8252_334945724d58.slice. 
Jul 2 07:49:44.224900 kubelet[1413]: I0702 07:49:44.224842 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac9a2839-6aae-4dfb-8252-334945724d58-host-proc-sys-kernel\") pod \"cilium-5glbg\" (UID: \"ac9a2839-6aae-4dfb-8252-334945724d58\") " pod="kube-system/cilium-5glbg" Jul 2 07:49:44.224900 kubelet[1413]: I0702 07:49:44.224879 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac9a2839-6aae-4dfb-8252-334945724d58-hostproc\") pod \"cilium-5glbg\" (UID: \"ac9a2839-6aae-4dfb-8252-334945724d58\") " pod="kube-system/cilium-5glbg" Jul 2 07:49:44.224900 kubelet[1413]: I0702 07:49:44.224897 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac9a2839-6aae-4dfb-8252-334945724d58-lib-modules\") pod \"cilium-5glbg\" (UID: \"ac9a2839-6aae-4dfb-8252-334945724d58\") " pod="kube-system/cilium-5glbg" Jul 2 07:49:44.224900 kubelet[1413]: I0702 07:49:44.224915 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac9a2839-6aae-4dfb-8252-334945724d58-host-proc-sys-net\") pod \"cilium-5glbg\" (UID: \"ac9a2839-6aae-4dfb-8252-334945724d58\") " pod="kube-system/cilium-5glbg" Jul 2 07:49:44.225236 kubelet[1413]: I0702 07:49:44.224934 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfntk\" (UniqueName: \"kubernetes.io/projected/ac9a2839-6aae-4dfb-8252-334945724d58-kube-api-access-zfntk\") pod \"cilium-5glbg\" (UID: \"ac9a2839-6aae-4dfb-8252-334945724d58\") " pod="kube-system/cilium-5glbg" Jul 2 07:49:44.225236 kubelet[1413]: I0702 07:49:44.224952 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac9a2839-6aae-4dfb-8252-334945724d58-bpf-maps\") pod \"cilium-5glbg\" (UID: \"ac9a2839-6aae-4dfb-8252-334945724d58\") " pod="kube-system/cilium-5glbg" Jul 2 07:49:44.225236 kubelet[1413]: I0702 07:49:44.225071 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac9a2839-6aae-4dfb-8252-334945724d58-hubble-tls\") pod \"cilium-5glbg\" (UID: \"ac9a2839-6aae-4dfb-8252-334945724d58\") " pod="kube-system/cilium-5glbg" Jul 2 07:49:44.225236 kubelet[1413]: I0702 07:49:44.225130 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ac9a2839-6aae-4dfb-8252-334945724d58-cilium-ipsec-secrets\") pod \"cilium-5glbg\" (UID: \"ac9a2839-6aae-4dfb-8252-334945724d58\") " pod="kube-system/cilium-5glbg" Jul 2 07:49:44.225236 kubelet[1413]: I0702 07:49:44.225151 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac9a2839-6aae-4dfb-8252-334945724d58-etc-cni-netd\") pod \"cilium-5glbg\" (UID: \"ac9a2839-6aae-4dfb-8252-334945724d58\") " pod="kube-system/cilium-5glbg" Jul 2 07:49:44.225236 kubelet[1413]: I0702 07:49:44.225178 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/ac9a2839-6aae-4dfb-8252-334945724d58-xtables-lock\") pod \"cilium-5glbg\" (UID: \"ac9a2839-6aae-4dfb-8252-334945724d58\") " pod="kube-system/cilium-5glbg" Jul 2 07:49:44.225236 kubelet[1413]: I0702 07:49:44.225199 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac9a2839-6aae-4dfb-8252-334945724d58-clustermesh-secrets\") pod \"cilium-5glbg\" (UID: \"ac9a2839-6aae-4dfb-8252-334945724d58\") " pod="kube-system/cilium-5glbg" Jul 2 07:49:44.225236 kubelet[1413]: I0702 07:49:44.225224 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac9a2839-6aae-4dfb-8252-334945724d58-cilium-cgroup\") pod \"cilium-5glbg\" (UID: \"ac9a2839-6aae-4dfb-8252-334945724d58\") " pod="kube-system/cilium-5glbg" Jul 2 07:49:44.225428 kubelet[1413]: I0702 07:49:44.225298 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac9a2839-6aae-4dfb-8252-334945724d58-cni-path\") pod \"cilium-5glbg\" (UID: \"ac9a2839-6aae-4dfb-8252-334945724d58\") " pod="kube-system/cilium-5glbg" Jul 2 07:49:44.225428 kubelet[1413]: I0702 07:49:44.225349 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac9a2839-6aae-4dfb-8252-334945724d58-cilium-run\") pod \"cilium-5glbg\" (UID: \"ac9a2839-6aae-4dfb-8252-334945724d58\") " pod="kube-system/cilium-5glbg" Jul 2 07:49:44.225428 kubelet[1413]: I0702 07:49:44.225386 1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac9a2839-6aae-4dfb-8252-334945724d58-cilium-config-path\") pod \"cilium-5glbg\" (UID: \"ac9a2839-6aae-4dfb-8252-334945724d58\") " pod="kube-system/cilium-5glbg" Jul 2 07:49:44.265385 kubelet[1413]: W0702 07:49:44.265320 1413 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eb2eeca_d245_4f3e_860e_0052a2310f61.slice/cri-containerd-a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668.scope WatchSource:0}: task a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668 not found: not found Jul 2 07:49:44.274953 kubelet[1413]: E0702 07:49:44.274928 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:44.368907 env[1193]: time="2024-07-02T07:49:44.368806257Z" level=info msg="RemoveContainer for \"a58d54954e5e178717ed75a6187817a372c574bdd06a76b186f7e53929d7c668\" returns successfully" Jul 2 07:49:44.728333 kubelet[1413]: E0702 07:49:44.728191 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:44.728756 env[1193]: time="2024-07-02T07:49:44.728693920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5glbg,Uid:ac9a2839-6aae-4dfb-8252-334945724d58,Namespace:kube-system,Attempt:0,}" Jul 2 07:49:44.932926 env[1193]: time="2024-07-02T07:49:44.932850751Z" level=info msg="ImageCreate event 
&ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:45.011423 env[1193]: time="2024-07-02T07:49:45.011242312Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:45.023208 kubelet[1413]: I0702 07:49:45.023163 1413 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4eb2eeca-d245-4f3e-860e-0052a2310f61" path="/var/lib/kubelet/pods/4eb2eeca-d245-4f3e-860e-0052a2310f61/volumes" Jul 2 07:49:45.043622 env[1193]: time="2024-07-02T07:49:45.043573430Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:49:45.043799 env[1193]: time="2024-07-02T07:49:45.043763670Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 07:49:45.046788 env[1193]: time="2024-07-02T07:49:45.046751806Z" level=info msg="CreateContainer within sandbox \"de00c16c8d70267764a8bad4b2c0351ee88021940e2b7150f5c5c781b92fa3ee\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 07:49:45.048637 env[1193]: time="2024-07-02T07:49:45.048562138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:49:45.048637 env[1193]: time="2024-07-02T07:49:45.048608523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:49:45.048637 env[1193]: time="2024-07-02T07:49:45.048620297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:49:45.048891 env[1193]: time="2024-07-02T07:49:45.048824766Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e732960c825304a4c330872897473d7c4104b28b9e57cec12e22a9de6c9ff681 pid=3118 runtime=io.containerd.runc.v2 Jul 2 07:49:45.059768 systemd[1]: Started cri-containerd-e732960c825304a4c330872897473d7c4104b28b9e57cec12e22a9de6c9ff681.scope. 
Jul 2 07:49:45.075715 env[1193]: time="2024-07-02T07:49:45.075662339Z" level=info msg="CreateContainer within sandbox \"de00c16c8d70267764a8bad4b2c0351ee88021940e2b7150f5c5c781b92fa3ee\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"246d2cae68e1cecc9a6cb5db86dc97d455af62f478398c8c4a48efa88a4ca507\"" Jul 2 07:49:45.076766 env[1193]: time="2024-07-02T07:49:45.076704687Z" level=info msg="StartContainer for \"246d2cae68e1cecc9a6cb5db86dc97d455af62f478398c8c4a48efa88a4ca507\"" Jul 2 07:49:45.085955 env[1193]: time="2024-07-02T07:49:45.085905861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5glbg,Uid:ac9a2839-6aae-4dfb-8252-334945724d58,Namespace:kube-system,Attempt:0,} returns sandbox id \"e732960c825304a4c330872897473d7c4104b28b9e57cec12e22a9de6c9ff681\"" Jul 2 07:49:45.086634 kubelet[1413]: E0702 07:49:45.086611 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:45.089447 env[1193]: time="2024-07-02T07:49:45.089396778Z" level=info msg="CreateContainer within sandbox \"e732960c825304a4c330872897473d7c4104b28b9e57cec12e22a9de6c9ff681\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:49:45.094173 systemd[1]: Started cri-containerd-246d2cae68e1cecc9a6cb5db86dc97d455af62f478398c8c4a48efa88a4ca507.scope. Jul 2 07:49:45.108577 env[1193]: time="2024-07-02T07:49:45.108525677Z" level=info msg="CreateContainer within sandbox \"e732960c825304a4c330872897473d7c4104b28b9e57cec12e22a9de6c9ff681\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9326677b6aff7abd2f2b3335cf1fe8d026ea13de54a42e3a36ba74ef4e1def91\"" Jul 2 07:49:45.109502 env[1193]: time="2024-07-02T07:49:45.109457547Z" level=info msg="StartContainer for \"9326677b6aff7abd2f2b3335cf1fe8d026ea13de54a42e3a36ba74ef4e1def91\"" Jul 2 07:49:45.120967 env[1193]: time="2024-07-02T07:49:45.120908676Z" level=info msg="StartContainer for \"246d2cae68e1cecc9a6cb5db86dc97d455af62f478398c8c4a48efa88a4ca507\" returns successfully" Jul 2 07:49:45.126652 systemd[1]: Started cri-containerd-9326677b6aff7abd2f2b3335cf1fe8d026ea13de54a42e3a36ba74ef4e1def91.scope. Jul 2 07:49:45.157391 env[1193]: time="2024-07-02T07:49:45.157322553Z" level=info msg="StartContainer for \"9326677b6aff7abd2f2b3335cf1fe8d026ea13de54a42e3a36ba74ef4e1def91\" returns successfully" Jul 2 07:49:45.162124 kubelet[1413]: E0702 07:49:45.161483 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:45.166250 systemd[1]: cri-containerd-9326677b6aff7abd2f2b3335cf1fe8d026ea13de54a42e3a36ba74ef4e1def91.scope: Deactivated successfully. 
Jul 2 07:49:45.168712 kubelet[1413]: I0702 07:49:45.168682 1413 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-9xdg8" podStartSLOduration=1.22408415 podCreationTimestamp="2024-07-02 07:49:40 +0000 UTC" firstStartedPulling="2024-07-02 07:49:41.100395619 +0000 UTC m=+61.113664113" lastFinishedPulling="2024-07-02 07:49:45.044948068 +0000 UTC m=+65.058216552" observedRunningTime="2024-07-02 07:49:45.168083204 +0000 UTC m=+65.181351698" watchObservedRunningTime="2024-07-02 07:49:45.168636589 +0000 UTC m=+65.181905083" Jul 2 07:49:45.219505 env[1193]: time="2024-07-02T07:49:45.219446310Z" level=info msg="shim disconnected" id=9326677b6aff7abd2f2b3335cf1fe8d026ea13de54a42e3a36ba74ef4e1def91 Jul 2 07:49:45.219505 env[1193]: time="2024-07-02T07:49:45.219493978Z" level=warning msg="cleaning up after shim disconnected" id=9326677b6aff7abd2f2b3335cf1fe8d026ea13de54a42e3a36ba74ef4e1def91 namespace=k8s.io Jul 2 07:49:45.219505 env[1193]: time="2024-07-02T07:49:45.219502957Z" level=info msg="cleaning up dead shim" Jul 2 07:49:45.226903 env[1193]: time="2024-07-02T07:49:45.226853437Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3242 runtime=io.containerd.runc.v2\n" Jul 2 07:49:45.276236 kubelet[1413]: E0702 07:49:45.276072 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:46.076780 kubelet[1413]: E0702 07:49:46.076733 1413 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:49:46.163960 kubelet[1413]: E0702 07:49:46.163931 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:46.164183 kubelet[1413]: E0702 07:49:46.164035 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:46.165441 env[1193]: time="2024-07-02T07:49:46.165399540Z" level=info msg="CreateContainer within sandbox \"e732960c825304a4c330872897473d7c4104b28b9e57cec12e22a9de6c9ff681\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:49:46.185423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3316904299.mount: Deactivated successfully. Jul 2 07:49:46.189252 env[1193]: time="2024-07-02T07:49:46.189166493Z" level=info msg="CreateContainer within sandbox \"e732960c825304a4c330872897473d7c4104b28b9e57cec12e22a9de6c9ff681\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"230bfe02b62a6e20e5bdaeda08422046f94ed49908250ad9d011876529bc43db\"" Jul 2 07:49:46.189818 env[1193]: time="2024-07-02T07:49:46.189775660Z" level=info msg="StartContainer for \"230bfe02b62a6e20e5bdaeda08422046f94ed49908250ad9d011876529bc43db\"" Jul 2 07:49:46.205967 systemd[1]: Started cri-containerd-230bfe02b62a6e20e5bdaeda08422046f94ed49908250ad9d011876529bc43db.scope. 
Jul 2 07:49:46.244581 env[1193]: time="2024-07-02T07:49:46.244503939Z" level=info msg="StartContainer for \"230bfe02b62a6e20e5bdaeda08422046f94ed49908250ad9d011876529bc43db\" returns successfully" Jul 2 07:49:46.251685 systemd[1]: cri-containerd-230bfe02b62a6e20e5bdaeda08422046f94ed49908250ad9d011876529bc43db.scope: Deactivated successfully. Jul 2 07:49:46.272692 env[1193]: time="2024-07-02T07:49:46.272624802Z" level=info msg="shim disconnected" id=230bfe02b62a6e20e5bdaeda08422046f94ed49908250ad9d011876529bc43db Jul 2 07:49:46.272692 env[1193]: time="2024-07-02T07:49:46.272687270Z" level=warning msg="cleaning up after shim disconnected" id=230bfe02b62a6e20e5bdaeda08422046f94ed49908250ad9d011876529bc43db namespace=k8s.io Jul 2 07:49:46.272894 env[1193]: time="2024-07-02T07:49:46.272702672Z" level=info msg="cleaning up dead shim" Jul 2 07:49:46.276680 kubelet[1413]: E0702 07:49:46.276628 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:46.278980 env[1193]: time="2024-07-02T07:49:46.278937820Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3304 runtime=io.containerd.runc.v2\n" Jul 2 07:49:46.331090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-230bfe02b62a6e20e5bdaeda08422046f94ed49908250ad9d011876529bc43db-rootfs.mount: Deactivated successfully. Jul 2 07:49:47.167070 kubelet[1413]: E0702 07:49:47.167034 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:47.168753 env[1193]: time="2024-07-02T07:49:47.168709108Z" level=info msg="CreateContainer within sandbox \"e732960c825304a4c330872897473d7c4104b28b9e57cec12e22a9de6c9ff681\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:49:47.276961 kubelet[1413]: E0702 07:49:47.276889 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:47.406697 env[1193]: time="2024-07-02T07:49:47.406605270Z" level=info msg="CreateContainer within sandbox \"e732960c825304a4c330872897473d7c4104b28b9e57cec12e22a9de6c9ff681\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cd78eb44a3cb4ad679675f55922a5d35d307fbd3499e32d03e933dd6f7353c62\"" Jul 2 07:49:47.407402 env[1193]: time="2024-07-02T07:49:47.407365213Z" level=info msg="StartContainer for \"cd78eb44a3cb4ad679675f55922a5d35d307fbd3499e32d03e933dd6f7353c62\"" Jul 2 07:49:47.428029 systemd[1]: Started cri-containerd-cd78eb44a3cb4ad679675f55922a5d35d307fbd3499e32d03e933dd6f7353c62.scope. Jul 2 07:49:47.517682 systemd[1]: cri-containerd-cd78eb44a3cb4ad679675f55922a5d35d307fbd3499e32d03e933dd6f7353c62.scope: Deactivated successfully. 
Jul 2 07:49:47.548012 env[1193]: time="2024-07-02T07:49:47.547928973Z" level=info msg="StartContainer for \"cd78eb44a3cb4ad679675f55922a5d35d307fbd3499e32d03e933dd6f7353c62\" returns successfully" Jul 2 07:49:47.719755 env[1193]: time="2024-07-02T07:49:47.719582758Z" level=info msg="shim disconnected" id=cd78eb44a3cb4ad679675f55922a5d35d307fbd3499e32d03e933dd6f7353c62 Jul 2 07:49:47.719755 env[1193]: time="2024-07-02T07:49:47.719642941Z" level=warning msg="cleaning up after shim disconnected" id=cd78eb44a3cb4ad679675f55922a5d35d307fbd3499e32d03e933dd6f7353c62 namespace=k8s.io Jul 2 07:49:47.719755 env[1193]: time="2024-07-02T07:49:47.719654474Z" level=info msg="cleaning up dead shim" Jul 2 07:49:47.727539 env[1193]: time="2024-07-02T07:49:47.727503817Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3360 runtime=io.containerd.runc.v2\n" Jul 2 07:49:48.171481 kubelet[1413]: E0702 07:49:48.171412 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:48.173738 env[1193]: time="2024-07-02T07:49:48.173666050Z" level=info msg="CreateContainer within sandbox \"e732960c825304a4c330872897473d7c4104b28b9e57cec12e22a9de6c9ff681\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:49:48.195347 env[1193]: time="2024-07-02T07:49:48.195276836Z" level=info msg="CreateContainer within sandbox \"e732960c825304a4c330872897473d7c4104b28b9e57cec12e22a9de6c9ff681\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"290f65a2585e9855d4294f7b202683021a559d8a3ade1ca9998a8b1b995d5c1e\"" Jul 2 07:49:48.195865 env[1193]: time="2024-07-02T07:49:48.195835916Z" level=info msg="StartContainer for \"290f65a2585e9855d4294f7b202683021a559d8a3ade1ca9998a8b1b995d5c1e\"" Jul 2 07:49:48.209906 systemd[1]: Started cri-containerd-290f65a2585e9855d4294f7b202683021a559d8a3ade1ca9998a8b1b995d5c1e.scope. Jul 2 07:49:48.232615 systemd[1]: cri-containerd-290f65a2585e9855d4294f7b202683021a559d8a3ade1ca9998a8b1b995d5c1e.scope: Deactivated successfully. Jul 2 07:49:48.233909 env[1193]: time="2024-07-02T07:49:48.233840902Z" level=info msg="StartContainer for \"290f65a2585e9855d4294f7b202683021a559d8a3ade1ca9998a8b1b995d5c1e\" returns successfully" Jul 2 07:49:48.253357 env[1193]: time="2024-07-02T07:49:48.253282736Z" level=info msg="shim disconnected" id=290f65a2585e9855d4294f7b202683021a559d8a3ade1ca9998a8b1b995d5c1e Jul 2 07:49:48.253357 env[1193]: time="2024-07-02T07:49:48.253336766Z" level=warning msg="cleaning up after shim disconnected" id=290f65a2585e9855d4294f7b202683021a559d8a3ade1ca9998a8b1b995d5c1e namespace=k8s.io Jul 2 07:49:48.253357 env[1193]: time="2024-07-02T07:49:48.253346175Z" level=info msg="cleaning up dead shim" Jul 2 07:49:48.260026 env[1193]: time="2024-07-02T07:49:48.259977918Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:49:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3412 runtime=io.containerd.runc.v2\n" Jul 2 07:49:48.277500 kubelet[1413]: E0702 07:49:48.277446 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:48.338251 systemd[1]: run-containerd-runc-k8s.io-cd78eb44a3cb4ad679675f55922a5d35d307fbd3499e32d03e933dd6f7353c62-runc.zW9VIO.mount: Deactivated successfully. 
Jul 2 07:49:48.338345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd78eb44a3cb4ad679675f55922a5d35d307fbd3499e32d03e933dd6f7353c62-rootfs.mount: Deactivated successfully. Jul 2 07:49:49.176120 kubelet[1413]: E0702 07:49:49.176072 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:49.178455 env[1193]: time="2024-07-02T07:49:49.178400682Z" level=info msg="CreateContainer within sandbox \"e732960c825304a4c330872897473d7c4104b28b9e57cec12e22a9de6c9ff681\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:49:49.194806 env[1193]: time="2024-07-02T07:49:49.194737391Z" level=info msg="CreateContainer within sandbox \"e732960c825304a4c330872897473d7c4104b28b9e57cec12e22a9de6c9ff681\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"277de5ea977f93c3d84b7dfb43e281111af4cf17f5223cb32cbe5caeff060780\"" Jul 2 07:49:49.195361 env[1193]: time="2024-07-02T07:49:49.195326741Z" level=info msg="StartContainer for \"277de5ea977f93c3d84b7dfb43e281111af4cf17f5223cb32cbe5caeff060780\"" Jul 2 07:49:49.213189 systemd[1]: Started cri-containerd-277de5ea977f93c3d84b7dfb43e281111af4cf17f5223cb32cbe5caeff060780.scope. Jul 2 07:49:49.239493 env[1193]: time="2024-07-02T07:49:49.239422582Z" level=info msg="StartContainer for \"277de5ea977f93c3d84b7dfb43e281111af4cf17f5223cb32cbe5caeff060780\" returns successfully" Jul 2 07:49:49.278514 kubelet[1413]: E0702 07:49:49.278453 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:49.575034 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 07:49:50.180640 kubelet[1413]: E0702 07:49:50.180594 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:50.192656 kubelet[1413]: I0702 07:49:50.192611 1413 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5glbg" podStartSLOduration=7.1925710370000004 podCreationTimestamp="2024-07-02 07:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:49:50.191659475 +0000 UTC m=+70.204927979" watchObservedRunningTime="2024-07-02 07:49:50.192571037 +0000 UTC m=+70.205839521" Jul 2 07:49:50.279415 kubelet[1413]: E0702 07:49:50.279359 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:51.186968 kubelet[1413]: E0702 07:49:51.186932 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:51.279761 kubelet[1413]: E0702 07:49:51.279678 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:52.066946 systemd-networkd[1031]: lxc_health: Link UP Jul 2 07:49:52.080252 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:49:52.079759 systemd-networkd[1031]: lxc_health: Gained carrier Jul 2 07:49:52.188799 kubelet[1413]: E0702 07:49:52.188746 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:52.280005 kubelet[1413]: E0702 07:49:52.279936 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:52.542969 kubelet[1413]: E0702 07:49:52.542935 1413 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35274->127.0.0.1:36473: write tcp 127.0.0.1:35274->127.0.0.1:36473: write: connection reset by peer Jul 2 07:49:53.190104 kubelet[1413]: E0702 07:49:53.190069 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:53.281019 kubelet[1413]: E0702 07:49:53.280970 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:53.329209 systemd-networkd[1031]: lxc_health: Gained IPv6LL Jul 2 07:49:54.191516 kubelet[1413]: E0702 07:49:54.191474 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:54.281638 kubelet[1413]: E0702 07:49:54.281587 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:55.192870 kubelet[1413]: E0702 07:49:55.192826 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:49:55.281921 kubelet[1413]: E0702 07:49:55.281856 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:56.282626 kubelet[1413]: E0702 07:49:56.282561 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:57.283672 kubelet[1413]: E0702 07:49:57.283634 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:58.284659 kubelet[1413]: E0702 07:49:58.284598 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:59.285009 kubelet[1413]: E0702 07:49:59.284951 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:00.234810 kubelet[1413]: E0702 07:50:00.234727 1413 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:00.286186 kubelet[1413]: E0702 07:50:00.286077 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"