Jul 2 07:49:48.057963 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024 Jul 2 07:49:48.058005 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:49:48.058023 kernel: BIOS-provided physical RAM map: Jul 2 07:49:48.058037 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jul 2 07:49:48.058050 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jul 2 07:49:48.058063 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jul 2 07:49:48.058083 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jul 2 07:49:48.058097 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jul 2 07:49:48.058111 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jul 2 07:49:48.058124 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jul 2 07:49:48.058138 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jul 2 07:49:48.058151 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jul 2 07:49:48.058165 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jul 2 07:49:48.058179 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jul 2 07:49:48.058210 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jul 2 07:49:48.058226 kernel: NX (Execute Disable) protection: active Jul 2 07:49:48.058241 kernel: efi: EFI v2.70 by EDK II Jul 2 07:49:48.058257 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd2d2018 Jul 2 07:49:48.058272 kernel: random: crng init done Jul 2 07:49:48.058288 kernel: SMBIOS 2.4 present. 
Jul 2 07:49:48.058303 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024 Jul 2 07:49:48.058318 kernel: Hypervisor detected: KVM Jul 2 07:49:48.058337 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 07:49:48.058353 kernel: kvm-clock: cpu 0, msr 204192001, primary cpu clock Jul 2 07:49:48.058368 kernel: kvm-clock: using sched offset of 12205698658 cycles Jul 2 07:49:48.058384 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 07:49:48.058399 kernel: tsc: Detected 2299.998 MHz processor Jul 2 07:49:48.058415 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 07:49:48.058431 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 07:49:48.058446 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jul 2 07:49:48.058461 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 07:49:48.058476 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jul 2 07:49:48.058495 kernel: Using GB pages for direct mapping Jul 2 07:49:48.058510 kernel: Secure boot disabled Jul 2 07:49:48.058526 kernel: ACPI: Early table checksum verification disabled Jul 2 07:49:48.058542 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jul 2 07:49:48.058556 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jul 2 07:49:48.058571 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jul 2 07:49:48.058586 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jul 2 07:49:48.058601 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jul 2 07:49:48.058627 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217) Jul 2 07:49:48.058644 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jul 2 07:49:48.058661 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jul 2 07:49:48.058676 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jul 2 07:49:48.058693 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jul 2 07:49:48.058710 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jul 2 07:49:48.058729 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jul 2 07:49:48.058746 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jul 2 07:49:48.058762 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jul 2 07:49:48.058779 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jul 2 07:49:48.058795 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jul 2 07:49:48.058811 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jul 2 07:49:48.058828 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jul 2 07:49:48.058844 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jul 2 07:49:48.058860 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jul 2 07:49:48.058905 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 2 07:49:48.058921 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 2 07:49:48.058937 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 2 07:49:48.058954 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jul 2 07:49:48.058970 kernel: ACPI: SRAT: Node 0 PXM 0 
[mem 0x100000000-0x21fffffff] Jul 2 07:49:48.058987 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jul 2 07:49:48.059003 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jul 2 07:49:48.059027 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jul 2 07:49:48.059044 kernel: Zone ranges: Jul 2 07:49:48.059066 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 07:49:48.059083 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 2 07:49:48.059099 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jul 2 07:49:48.059116 kernel: Movable zone start for each node Jul 2 07:49:48.059132 kernel: Early memory node ranges Jul 2 07:49:48.059149 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jul 2 07:49:48.059165 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jul 2 07:49:48.059182 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jul 2 07:49:48.059206 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jul 2 07:49:48.059227 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jul 2 07:49:48.059243 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jul 2 07:49:48.059260 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:49:48.059277 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jul 2 07:49:48.059294 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jul 2 07:49:48.059310 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 2 07:49:48.059326 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jul 2 07:49:48.059342 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 2 07:49:48.059359 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 07:49:48.059380 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 07:49:48.059396 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 07:49:48.059413 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 07:49:48.059430 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 07:49:48.059447 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 07:49:48.059462 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 07:49:48.059479 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 2 07:49:48.059495 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jul 2 07:49:48.059511 kernel: Booting paravirtualized kernel on KVM Jul 2 07:49:48.059532 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 07:49:48.059549 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Jul 2 07:49:48.059565 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Jul 2 07:49:48.059582 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Jul 2 07:49:48.059598 kernel: pcpu-alloc: [0] 0 1 Jul 2 07:49:48.059614 kernel: kvm-guest: PV spinlocks enabled Jul 2 07:49:48.059631 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 07:49:48.059647 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1932280 Jul 2 07:49:48.059663 kernel: Policy zone: Normal Jul 2 07:49:48.059686 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:49:48.059703 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 07:49:48.059720 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 2 07:49:48.059737 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 07:49:48.059753 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 07:49:48.059770 kernel: Memory: 7516812K/7860584K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 343512K reserved, 0K cma-reserved) Jul 2 07:49:48.059787 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 07:49:48.059803 kernel: Kernel/User page tables isolation: enabled Jul 2 07:49:48.059823 kernel: ftrace: allocating 34514 entries in 135 pages Jul 2 07:49:48.059839 kernel: ftrace: allocated 135 pages with 4 groups Jul 2 07:49:48.059856 kernel: rcu: Hierarchical RCU implementation. Jul 2 07:49:48.059891 kernel: rcu: RCU event tracing is enabled. Jul 2 07:49:48.059908 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 07:49:48.059925 kernel: Rude variant of Tasks RCU enabled. Jul 2 07:49:48.059940 kernel: Tracing variant of Tasks RCU enabled. Jul 2 07:49:48.059957 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 07:49:48.059973 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 07:49:48.059994 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 2 07:49:48.060025 kernel: Console: colour dummy device 80x25 Jul 2 07:49:48.060042 kernel: printk: console [ttyS0] enabled Jul 2 07:49:48.060063 kernel: ACPI: Core revision 20210730 Jul 2 07:49:48.060080 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 07:49:48.060097 kernel: x2apic enabled Jul 2 07:49:48.060115 kernel: Switched APIC routing to physical x2apic. Jul 2 07:49:48.060133 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jul 2 07:49:48.060151 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 2 07:49:48.060169 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jul 2 07:49:48.060190 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jul 2 07:49:48.060216 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jul 2 07:49:48.060233 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 07:49:48.060251 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 2 07:49:48.060268 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 2 07:49:48.060292 kernel: Spectre V2 : Mitigation: IBRS Jul 2 07:49:48.060313 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 07:49:48.060331 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 07:49:48.060348 kernel: RETBleed: Mitigation: IBRS Jul 2 07:49:48.060365 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 07:49:48.060383 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Jul 2 07:49:48.060400 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 2 07:49:48.060418 kernel: MDS: Mitigation: Clear CPU buffers Jul 2 07:49:48.060436 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 07:49:48.060452 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 07:49:48.060473 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 07:49:48.060490 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 07:49:48.060508 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 07:49:48.060525 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 2 07:49:48.060543 kernel: Freeing SMP alternatives memory: 32K Jul 2 07:49:48.060560 kernel: pid_max: default: 32768 minimum: 301 Jul 2 07:49:48.060577 kernel: LSM: Security Framework initializing Jul 2 07:49:48.060594 kernel: SELinux: Initializing. Jul 2 07:49:48.060612 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 07:49:48.060633 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 07:49:48.060651 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jul 2 07:49:48.060669 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jul 2 07:49:48.060686 kernel: signal: max sigframe size: 1776 Jul 2 07:49:48.060704 kernel: rcu: Hierarchical SRCU implementation. Jul 2 07:49:48.060721 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 2 07:49:48.060738 kernel: smp: Bringing up secondary CPUs ... Jul 2 07:49:48.060755 kernel: x86: Booting SMP configuration: Jul 2 07:49:48.060772 kernel: .... node #0, CPUs: #1 Jul 2 07:49:48.060793 kernel: kvm-clock: cpu 1, msr 204192041, secondary cpu clock Jul 2 07:49:48.060811 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 2 07:49:48.060830 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jul 2 07:49:48.060847 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 07:49:48.060875 kernel: smpboot: Max logical packages: 1 Jul 2 07:49:48.060893 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jul 2 07:49:48.060910 kernel: devtmpfs: initialized Jul 2 07:49:48.060928 kernel: x86/mm: Memory block size: 128MB Jul 2 07:49:48.060945 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jul 2 07:49:48.060967 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 07:49:48.060984 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 07:49:48.061002 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 07:49:48.061019 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 07:49:48.061037 kernel: audit: initializing netlink subsys (disabled) Jul 2 07:49:48.061054 kernel: audit: type=2000 audit(1719906586.966:1): state=initialized audit_enabled=0 res=1 Jul 2 07:49:48.061071 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 07:49:48.061087 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 07:49:48.061103 kernel: cpuidle: using governor menu Jul 2 07:49:48.061123 kernel: ACPI: bus type PCI registered Jul 2 07:49:48.061140 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 07:49:48.061158 kernel: dca service started, version 1.12.1 Jul 2 07:49:48.061176 kernel: PCI: Using configuration type 1 for base access Jul 2 07:49:48.061193 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 2 07:49:48.061217 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 07:49:48.061235 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 07:49:48.061252 kernel: ACPI: Added _OSI(Module Device) Jul 2 07:49:48.061270 kernel: ACPI: Added _OSI(Processor Device) Jul 2 07:49:48.061291 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 07:49:48.061309 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 07:49:48.061326 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 2 07:49:48.061344 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 2 07:49:48.061361 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 2 07:49:48.061378 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 2 07:49:48.061396 kernel: ACPI: Interpreter enabled Jul 2 07:49:48.061413 kernel: ACPI: PM: (supports S0 S3 S5) Jul 2 07:49:48.061430 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 07:49:48.061450 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 07:49:48.061467 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jul 2 07:49:48.061484 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 07:49:48.061697 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 2 07:49:48.061858 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Jul 2 07:49:48.061900 kernel: PCI host bridge to bus 0000:00 Jul 2 07:49:48.062053 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 07:49:48.062205 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 07:49:48.062353 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 07:49:48.062498 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jul 2 07:49:48.062646 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 07:49:48.062833 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 07:49:48.063022 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jul 2 07:49:48.063210 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 07:49:48.063378 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 2 07:49:48.063554 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jul 2 07:49:48.063723 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jul 2 07:49:48.063909 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jul 2 07:49:48.064095 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 2 07:49:48.064270 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jul 2 07:49:48.064443 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jul 2 07:49:48.064616 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 07:49:48.064783 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jul 2 07:49:48.064962 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jul 2 07:49:48.064986 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 07:49:48.065002 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 07:49:48.065017 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 07:49:48.065038 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 07:49:48.065055 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 07:49:48.065071 kernel: iommu: Default domain type: Translated Jul 2 07:49:48.065088 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 07:49:48.065106 kernel: vgaarb: loaded Jul 2 07:49:48.065124 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 07:49:48.065141 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 07:49:48.065158 kernel: PTP clock support registered Jul 2 07:49:48.065175 kernel: Registered efivars operations Jul 2 07:49:48.065196 kernel: PCI: Using ACPI for IRQ routing Jul 2 07:49:48.065234 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 07:49:48.065251 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jul 2 07:49:48.065268 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jul 2 07:49:48.065283 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jul 2 07:49:48.065300 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jul 2 07:49:48.065316 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 07:49:48.065332 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 07:49:48.065350 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 07:49:48.065371 kernel: pnp: PnP ACPI init Jul 2 07:49:48.065389 kernel: pnp: PnP ACPI: found 7 devices Jul 2 07:49:48.065406 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 07:49:48.065423 kernel: NET: Registered PF_INET protocol family Jul 2 07:49:48.065440 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 07:49:48.065456 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jul 2 07:49:48.065474 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 07:49:48.065491 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 07:49:48.065509 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Jul 2 07:49:48.065529 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jul 2 07:49:48.065546 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 2 07:49:48.065561 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 2 07:49:48.065576 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 07:49:48.065592 kernel: NET: Registered PF_XDP protocol family Jul 2 07:49:48.066111 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 07:49:48.066301 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 07:49:48.066451 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 07:49:48.066603 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jul 2 07:49:48.066772 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 07:49:48.066796 kernel: PCI: CLS 0 bytes, default 64 Jul 2 07:49:48.066814 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 2 07:49:48.066832 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB) Jul 2 07:49:48.066850 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 07:49:48.066880 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 2 07:49:48.066898 kernel: clocksource: Switched to clocksource tsc Jul 2 07:49:48.066920 kernel: Initialise system trusted keyrings Jul 2 07:49:48.066937 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jul 2 07:49:48.066954 kernel: Key type asymmetric registered Jul 2 07:49:48.066972 kernel: Asymmetric key parser 'x509' registered Jul 2 07:49:48.066989 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 07:49:48.067006 kernel: io scheduler mq-deadline registered Jul 2 
07:49:48.067023 kernel: io scheduler kyber registered Jul 2 07:49:48.067040 kernel: io scheduler bfq registered Jul 2 07:49:48.067057 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 07:49:48.067079 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 07:49:48.067269 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jul 2 07:49:48.067291 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 2 07:49:48.067457 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jul 2 07:49:48.067480 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 07:49:48.067644 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jul 2 07:49:48.067666 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 07:49:48.067684 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 07:49:48.067702 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 2 07:49:48.067723 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jul 2 07:49:48.067741 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jul 2 07:49:48.067990 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jul 2 07:49:48.068017 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 07:49:48.068035 kernel: i8042: Warning: Keylock active Jul 2 07:49:48.068052 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 07:49:48.068070 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 07:49:48.076022 kernel: rtc_cmos 00:00: RTC can wake from S4 Jul 2 07:49:48.076192 kernel: rtc_cmos 00:00: registered as rtc0 Jul 2 07:49:48.076349 kernel: rtc_cmos 00:00: setting system clock to 2024-07-02T07:49:47 UTC (1719906587) Jul 2 07:49:48.076488 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jul 2 07:49:48.076508 kernel: intel_pstate: CPU model not supported Jul 2 07:49:48.076524 kernel: pstore: Registered efi as persistent store backend Jul 2 07:49:48.076540 kernel: NET: Registered PF_INET6 protocol family Jul 2 07:49:48.076556 kernel: Segment Routing with IPv6 Jul 2 07:49:48.076571 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 07:49:48.076591 kernel: NET: Registered PF_PACKET protocol family Jul 2 07:49:48.076607 kernel: Key type dns_resolver registered Jul 2 07:49:48.076622 kernel: IPI shorthand broadcast: enabled Jul 2 07:49:48.076639 kernel: sched_clock: Marking stable (685367874, 118424478)->(814205000, -10412648) Jul 2 07:49:48.076655 kernel: registered taskstats version 1 Jul 2 07:49:48.076671 kernel: Loading compiled-in X.509 certificates Jul 2 07:49:48.076686 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 07:49:48.076703 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 07:49:48.076718 kernel: Key type .fscrypt registered Jul 2 07:49:48.076737 kernel: Key type fscrypt-provisioning registered Jul 2 07:49:48.076753 kernel: pstore: Using crash dump compression: deflate Jul 2 07:49:48.076769 kernel: ima: Allocated hash algorithm: sha1 Jul 2 07:49:48.076785 kernel: ima: No architecture policies found Jul 2 07:49:48.076800 kernel: clk: Disabling unused clocks Jul 2 07:49:48.076815 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 07:49:48.076831 kernel: Write protecting the kernel read-only data: 28672k Jul 2 07:49:48.076847 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 
07:49:48.076882 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 07:49:48.076898 kernel: Run /init as init process Jul 2 07:49:48.076914 kernel: with arguments: Jul 2 07:49:48.076930 kernel: /init Jul 2 07:49:48.076945 kernel: with environment: Jul 2 07:49:48.076960 kernel: HOME=/ Jul 2 07:49:48.076975 kernel: TERM=linux Jul 2 07:49:48.076991 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 07:49:48.077010 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:49:48.077033 systemd[1]: Detected virtualization kvm. Jul 2 07:49:48.077050 systemd[1]: Detected architecture x86-64. Jul 2 07:49:48.077066 systemd[1]: Running in initrd. Jul 2 07:49:48.077082 systemd[1]: No hostname configured, using default hostname. Jul 2 07:49:48.077098 systemd[1]: Hostname set to . Jul 2 07:49:48.077115 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:49:48.077132 systemd[1]: Queued start job for default target initrd.target. Jul 2 07:49:48.077151 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:49:48.077167 systemd[1]: Reached target cryptsetup.target. Jul 2 07:49:48.077184 systemd[1]: Reached target paths.target. Jul 2 07:49:48.077206 systemd[1]: Reached target slices.target. Jul 2 07:49:48.077222 systemd[1]: Reached target swap.target. Jul 2 07:49:48.077238 systemd[1]: Reached target timers.target. Jul 2 07:49:48.077256 systemd[1]: Listening on iscsid.socket. Jul 2 07:49:48.077275 systemd[1]: Listening on iscsiuio.socket. Jul 2 07:49:48.077292 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 07:49:48.077308 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 07:49:48.077325 systemd[1]: Listening on systemd-journald.socket. Jul 2 07:49:48.077341 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:49:48.077358 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:49:48.077374 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:49:48.077390 systemd[1]: Reached target sockets.target. Jul 2 07:49:48.077407 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:49:48.077427 systemd[1]: Finished network-cleanup.service. Jul 2 07:49:48.077443 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 07:49:48.077460 systemd[1]: Starting systemd-journald.service... Jul 2 07:49:48.077494 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:49:48.077515 systemd[1]: Starting systemd-resolved.service... Jul 2 07:49:48.077532 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 07:49:48.077549 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:49:48.077570 kernel: audit: type=1130 audit(1719906588.074:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.077591 systemd-journald[189]: Journal started Jul 2 07:49:48.077664 systemd-journald[189]: Runtime Journal (/run/log/journal/e7d5c4a3470cd7ef58c7b9ab4f098c3e) is 8.0M, max 148.8M, 140.8M free. Jul 2 07:49:48.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:48.079885 systemd[1]: Started systemd-journald.service. Jul 2 07:49:48.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.084919 kernel: audit: type=1130 audit(1719906588.080:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.085446 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 07:49:48.092746 systemd-modules-load[190]: Inserted module 'overlay' Jul 2 07:49:48.112830 kernel: audit: type=1130 audit(1719906588.092:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.112863 kernel: audit: type=1130 audit(1719906588.099:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.094186 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 07:49:48.102394 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 07:49:48.109958 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:49:48.124152 systemd-resolved[191]: Positive Trust Anchors: Jul 2 07:49:48.124182 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:49:48.124248 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:49:48.134254 systemd-resolved[191]: Defaulting to hostname 'linux'. Jul 2 07:49:48.138165 systemd[1]: Started systemd-resolved.service. Jul 2 07:49:48.138348 systemd[1]: Reached target nss-lookup.target. Jul 2 07:49:48.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.141886 kernel: audit: type=1130 audit(1719906588.136:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.146936 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:49:48.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:48.150891 kernel: audit: type=1130 audit(1719906588.145:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.154782 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 07:49:48.169971 kernel: audit: type=1130 audit(1719906588.156:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.159163 systemd[1]: Starting dracut-cmdline.service... Jul 2 07:49:48.176882 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 07:49:48.179719 dracut-cmdline[206]: dracut-dracut-053 Jul 2 07:49:48.183762 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:49:48.192970 kernel: Bridge firewalling registered Jul 2 07:49:48.186638 systemd-modules-load[190]: Inserted module 'br_netfilter' Jul 2 07:49:48.218886 kernel: SCSI subsystem initialized Jul 2 07:49:48.237988 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 07:49:48.238045 kernel: device-mapper: uevent: version 1.0.3 Jul 2 07:49:48.238077 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 07:49:48.242968 systemd-modules-load[190]: Inserted module 'dm_multipath' Jul 2 07:49:48.244562 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:49:48.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.254060 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:49:48.258053 kernel: audit: type=1130 audit(1719906588.251:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.268087 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:49:48.274981 kernel: audit: type=1130 audit(1719906588.266:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.282896 kernel: Loading iSCSI transport class v2.0-870. 
Jul 2 07:49:48.302888 kernel: iscsi: registered transport (tcp) Jul 2 07:49:48.329168 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:49:48.329225 kernel: QLogic iSCSI HBA Driver Jul 2 07:49:48.372797 systemd[1]: Finished dracut-cmdline.service. Jul 2 07:49:48.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.374849 systemd[1]: Starting dracut-pre-udev.service... Jul 2 07:49:48.430919 kernel: raid6: avx2x4 gen() 18235 MB/s Jul 2 07:49:48.447908 kernel: raid6: avx2x4 xor() 8039 MB/s Jul 2 07:49:48.464905 kernel: raid6: avx2x2 gen() 18177 MB/s Jul 2 07:49:48.481910 kernel: raid6: avx2x2 xor() 18481 MB/s Jul 2 07:49:48.498907 kernel: raid6: avx2x1 gen() 13951 MB/s Jul 2 07:49:48.515905 kernel: raid6: avx2x1 xor() 16155 MB/s Jul 2 07:49:48.532909 kernel: raid6: sse2x4 gen() 11085 MB/s Jul 2 07:49:48.549905 kernel: raid6: sse2x4 xor() 6652 MB/s Jul 2 07:49:48.566909 kernel: raid6: sse2x2 gen() 11957 MB/s Jul 2 07:49:48.583905 kernel: raid6: sse2x2 xor() 7439 MB/s Jul 2 07:49:48.600907 kernel: raid6: sse2x1 gen() 10524 MB/s Jul 2 07:49:48.618276 kernel: raid6: sse2x1 xor() 5168 MB/s Jul 2 07:49:48.618316 kernel: raid6: using algorithm avx2x4 gen() 18235 MB/s Jul 2 07:49:48.618338 kernel: raid6: .... xor() 8039 MB/s, rmw enabled Jul 2 07:49:48.618994 kernel: raid6: using avx2x2 recovery algorithm Jul 2 07:49:48.633904 kernel: xor: automatically using best checksumming function avx Jul 2 07:49:48.738905 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:49:48.749996 systemd[1]: Finished dracut-pre-udev.service. Jul 2 07:49:48.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.749000 audit: BPF prog-id=7 op=LOAD Jul 2 07:49:48.749000 audit: BPF prog-id=8 op=LOAD Jul 2 07:49:48.752285 systemd[1]: Starting systemd-udevd.service... Jul 2 07:49:48.768772 systemd-udevd[390]: Using default interface naming scheme 'v252'. Jul 2 07:49:48.776066 systemd[1]: Started systemd-udevd.service. Jul 2 07:49:48.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.781172 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 07:49:48.801776 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Jul 2 07:49:48.839660 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 07:49:48.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.844045 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:49:48.906187 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:49:48.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:48.976891 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:49:49.015891 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 2 07:49:49.015972 kernel: AES CTR mode by8 optimization enabled Jul 2 07:49:49.026888 kernel: scsi host0: Virtio SCSI HBA Jul 2 07:49:49.055821 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jul 2 07:49:49.119030 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jul 2 07:49:49.119435 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jul 2 07:49:49.119646 kernel: sd 0:0:1:0: [sda] Write Protect is off Jul 2 07:49:49.121443 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jul 2 07:49:49.121742 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 2 07:49:49.129202 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 07:49:49.129262 kernel: GPT:17805311 != 25165823 Jul 2 07:49:49.129285 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 07:49:49.129305 kernel: GPT:17805311 != 25165823 Jul 2 07:49:49.130406 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 07:49:49.130432 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:49:49.132511 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jul 2 07:49:49.178890 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (446) Jul 2 07:49:49.183653 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 07:49:49.198956 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 07:49:49.204186 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 07:49:49.204398 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 07:49:49.215132 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:49:49.217059 systemd[1]: Starting disk-uuid.service... Jul 2 07:49:49.226719 disk-uuid[518]: Primary Header is updated. Jul 2 07:49:49.226719 disk-uuid[518]: Secondary Entries is updated. Jul 2 07:49:49.226719 disk-uuid[518]: Secondary Header is updated. Jul 2 07:49:49.236886 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:49:49.254894 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:49:49.261891 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:49:50.259793 disk-uuid[519]: The operation has completed successfully. Jul 2 07:49:50.263981 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:49:50.324358 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:49:50.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:50.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:50.324489 systemd[1]: Finished disk-uuid.service. Jul 2 07:49:50.339788 systemd[1]: Starting verity-setup.service... Jul 2 07:49:50.366888 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 07:49:50.431671 systemd[1]: Found device dev-mapper-usr.device. Jul 2 07:49:50.434154 systemd[1]: Mounting sysusr-usr.mount... Jul 2 07:49:50.446442 systemd[1]: Finished verity-setup.service. Jul 2 07:49:50.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:50.530923 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:49:50.530928 systemd[1]: Mounted sysusr-usr.mount. Jul 2 07:49:50.538212 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 07:49:50.598012 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:49:50.598051 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:49:50.598075 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:49:50.598097 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 07:49:50.539151 systemd[1]: Starting ignition-setup.service... Jul 2 07:49:50.561756 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 07:49:50.599379 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 07:49:50.619226 systemd[1]: Finished ignition-setup.service. Jul 2 07:49:50.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:50.623120 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 07:49:50.676720 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 07:49:50.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:50.676000 audit: BPF prog-id=9 op=LOAD Jul 2 07:49:50.678894 systemd[1]: Starting systemd-networkd.service... Jul 2 07:49:50.713051 systemd-networkd[693]: lo: Link UP Jul 2 07:49:50.713066 systemd-networkd[693]: lo: Gained carrier Jul 2 07:49:50.714385 systemd-networkd[693]: Enumeration completed Jul 2 07:49:50.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:50.715002 systemd[1]: Started systemd-networkd.service. Jul 2 07:49:50.715127 systemd-networkd[693]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:49:50.717229 systemd-networkd[693]: eth0: Link UP Jul 2 07:49:50.717237 systemd-networkd[693]: eth0: Gained carrier Jul 2 07:49:50.724973 systemd-networkd[693]: eth0: DHCPv4 address 10.128.0.56/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 2 07:49:50.735107 systemd[1]: Reached target network.target. Jul 2 07:49:50.804565 systemd[1]: Starting iscsiuio.service... Jul 2 07:49:50.819154 systemd[1]: Started iscsiuio.service. Jul 2 07:49:50.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:50.820615 systemd[1]: Starting iscsid.service... Jul 2 07:49:50.839203 systemd[1]: Started iscsid.service. Jul 2 07:49:50.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:50.861090 iscsid[703]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:49:50.861090 iscsid[703]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 07:49:50.861090 iscsid[703]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 2 07:49:50.861090 iscsid[703]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 07:49:50.861090 iscsid[703]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 07:49:50.861090 iscsid[703]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:49:50.861090 iscsid[703]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:49:50.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:50.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:50.854216 systemd[1]: Starting dracut-initqueue.service... Jul 2 07:49:51.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:50.887205 ignition[639]: Ignition 2.14.0 Jul 2 07:49:50.873176 systemd[1]: Finished dracut-initqueue.service. Jul 2 07:49:50.887219 ignition[639]: Stage: fetch-offline Jul 2 07:49:50.880613 systemd[1]: Reached target remote-fs-pre.target. Jul 2 07:49:50.887304 ignition[639]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:49:51.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:50.915140 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:49:50.887419 ignition[639]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:49:50.941139 systemd[1]: Reached target remote-fs.target. Jul 2 07:49:50.906386 ignition[639]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:49:50.957139 systemd[1]: Starting dracut-pre-mount.service... Jul 2 07:49:51.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:50.906588 ignition[639]: parsed url from cmdline: "" Jul 2 07:49:50.973419 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 07:49:50.906593 ignition[639]: no config URL provided Jul 2 07:49:50.990487 systemd[1]: Finished dracut-pre-mount.service. Jul 2 07:49:50.906601 ignition[639]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:49:51.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:51.017299 systemd[1]: Starting ignition-fetch.service... 
Jul 2 07:49:50.906612 ignition[639]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:49:51.051469 unknown[717]: fetched base config from "system" Jul 2 07:49:50.906621 ignition[639]: failed to fetch config: resource requires networking Jul 2 07:49:51.051482 unknown[717]: fetched base config from "system" Jul 2 07:49:50.906762 ignition[639]: Ignition finished successfully Jul 2 07:49:51.051494 unknown[717]: fetched user config from "gcp" Jul 2 07:49:51.029000 ignition[717]: Ignition 2.14.0 Jul 2 07:49:51.053605 systemd[1]: Finished ignition-fetch.service. Jul 2 07:49:51.029010 ignition[717]: Stage: fetch Jul 2 07:49:51.069215 systemd[1]: Starting ignition-kargs.service... Jul 2 07:49:51.029138 ignition[717]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:49:51.100390 systemd[1]: Finished ignition-kargs.service. Jul 2 07:49:51.029168 ignition[717]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:49:51.119166 systemd[1]: Starting ignition-disks.service... Jul 2 07:49:51.036567 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:49:51.142471 systemd[1]: Finished ignition-disks.service. Jul 2 07:49:51.036837 ignition[717]: parsed url from cmdline: "" Jul 2 07:49:51.162343 systemd[1]: Reached target initrd-root-device.target. Jul 2 07:49:51.036887 ignition[717]: no config URL provided Jul 2 07:49:51.180007 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:49:51.036901 ignition[717]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:49:51.194008 systemd[1]: Reached target local-fs.target. Jul 2 07:49:51.036917 ignition[717]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:49:51.208006 systemd[1]: Reached target sysinit.target. Jul 2 07:49:51.036973 ignition[717]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jul 2 07:49:51.220997 systemd[1]: Reached target basic.target. Jul 2 07:49:51.048400 ignition[717]: GET result: OK Jul 2 07:49:51.237140 systemd[1]: Starting systemd-fsck-root.service... 
Jul 2 07:49:51.048472 ignition[717]: parsing config with SHA512: 2b241be83368db3ac170c7ff1811feb1ed1579892595932346643590fcf226e0950ed3bd6daccc0cd9f8d492de1745b185c80b05c6544c31e29864308ef886a1 Jul 2 07:49:51.051991 ignition[717]: fetch: fetch complete Jul 2 07:49:51.052001 ignition[717]: fetch: fetch passed Jul 2 07:49:51.052150 ignition[717]: Ignition finished successfully Jul 2 07:49:51.082181 ignition[723]: Ignition 2.14.0 Jul 2 07:49:51.082191 ignition[723]: Stage: kargs Jul 2 07:49:51.082317 ignition[723]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:49:51.082372 ignition[723]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:49:51.089113 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:49:51.090167 ignition[723]: kargs: kargs passed Jul 2 07:49:51.090210 ignition[723]: Ignition finished successfully Jul 2 07:49:51.130337 ignition[729]: Ignition 2.14.0 Jul 2 07:49:51.130347 ignition[729]: Stage: disks Jul 2 07:49:51.130479 ignition[729]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:49:51.130510 ignition[729]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:49:51.137588 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:49:51.138712 ignition[729]: disks: disks passed Jul 2 07:49:51.138754 ignition[729]: Ignition finished successfully Jul 2 07:49:51.262090 systemd-fsck[737]: ROOT: clean, 614/1628000 files, 124057/1617920 blocks Jul 2 07:49:51.465792 systemd[1]: Finished systemd-fsck-root.service. Jul 2 07:49:51.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:51.475094 systemd[1]: Mounting sysroot.mount... Jul 2 07:49:51.504157 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 07:49:51.500092 systemd[1]: Mounted sysroot.mount. Jul 2 07:49:51.511161 systemd[1]: Reached target initrd-root-fs.target. Jul 2 07:49:51.529731 systemd[1]: Mounting sysroot-usr.mount... Jul 2 07:49:51.535078 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 07:49:51.535129 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:49:51.535159 systemd[1]: Reached target ignition-diskful.target. Jul 2 07:49:51.565417 systemd[1]: Mounted sysroot-usr.mount. Jul 2 07:49:51.591166 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:49:51.636172 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (743) Jul 2 07:49:51.636210 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:49:51.636226 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:49:51.636239 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:49:51.644016 systemd[1]: Starting initrd-setup-root.service... Jul 2 07:49:51.665052 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 07:49:51.662532 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Jul 2 07:49:51.673151 initrd-setup-root[766]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:49:51.683010 initrd-setup-root[774]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:49:51.692965 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:49:51.709963 initrd-setup-root[790]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:49:51.739060 systemd[1]: Finished initrd-setup-root.service. Jul 2 07:49:51.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:51.740254 systemd[1]: Starting ignition-mount.service... Jul 2 07:49:51.761963 systemd[1]: Starting sysroot-boot.service... Jul 2 07:49:51.776313 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 07:49:51.776449 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 2 07:49:51.802143 ignition[809]: INFO : Ignition 2.14.0 Jul 2 07:49:51.802143 ignition[809]: INFO : Stage: mount Jul 2 07:49:51.802143 ignition[809]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:49:51.802143 ignition[809]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:49:51.802143 ignition[809]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:49:51.916076 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (818) Jul 2 07:49:51.916115 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:49:51.916139 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:49:51.916161 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:49:51.916179 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 07:49:51.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:51.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:51.801067 systemd[1]: Finished sysroot-boot.service. Jul 2 07:49:51.931152 ignition[809]: INFO : mount: mount passed Jul 2 07:49:51.931152 ignition[809]: INFO : Ignition finished successfully Jul 2 07:49:51.810257 systemd[1]: Finished ignition-mount.service. Jul 2 07:49:51.826031 systemd[1]: Starting ignition-files.service... 
Jul 2 07:49:51.830044 systemd-networkd[693]: eth0: Gained IPv6LL Jul 2 07:49:51.968001 ignition[837]: INFO : Ignition 2.14.0 Jul 2 07:49:51.968001 ignition[837]: INFO : Stage: files Jul 2 07:49:51.968001 ignition[837]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:49:51.968001 ignition[837]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:49:51.968001 ignition[837]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:49:51.968001 ignition[837]: DEBUG : files: compiled without relabeling support, skipping Jul 2 07:49:51.968001 ignition[837]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 07:49:51.968001 ignition[837]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 07:49:51.968001 ignition[837]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 07:49:51.968001 ignition[837]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 07:49:51.968001 ignition[837]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 07:49:51.968001 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts" Jul 2 07:49:51.968001 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:49:52.136985 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (837) Jul 2 07:49:51.846847 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:49:52.144993 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3216308624" Jul 2 07:49:52.144993 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3216308624": device or resource busy Jul 2 07:49:52.144993 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3216308624", trying btrfs: device or resource busy Jul 2 07:49:52.144993 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3216308624" Jul 2 07:49:52.144993 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3216308624" Jul 2 07:49:52.144993 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem3216308624" Jul 2 07:49:52.144993 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem3216308624" Jul 2 07:49:52.144993 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts" Jul 2 07:49:52.144993 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Jul 2 07:49:52.144993 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:49:52.144993 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(8): 
[started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem221059799" Jul 2 07:49:52.144993 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(7): op(8): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem221059799": device or resource busy Jul 2 07:49:52.144993 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(7): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem221059799", trying btrfs: device or resource busy Jul 2 07:49:52.144993 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem221059799" Jul 2 07:49:51.903783 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 07:49:52.390045 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem221059799" Jul 2 07:49:52.390045 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [started] unmounting "/mnt/oem221059799" Jul 2 07:49:52.390045 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [finished] unmounting "/mnt/oem221059799" Jul 2 07:49:52.390045 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Jul 2 07:49:52.390045 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Jul 2 07:49:52.390045 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 07:49:52.390045 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:49:52.390045 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:49:52.390045 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:49:52.390045 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:49:52.390045 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Jul 2 07:49:52.390045 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:49:52.390045 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4091824140" Jul 2 07:49:52.390045 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4091824140": device or resource busy Jul 2 07:49:51.960608 unknown[837]: wrote ssh authorized keys file for user: core Jul 2 07:49:52.634982 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4091824140", trying btrfs: device or resource busy Jul 2 07:49:52.634982 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4091824140" Jul 2 
07:49:52.634982 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4091824140" Jul 2 07:49:52.634982 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem4091824140" Jul 2 07:49:52.634982 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem4091824140" Jul 2 07:49:52.634982 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Jul 2 07:49:52.634982 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Jul 2 07:49:52.634982 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:49:52.634982 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1878066073" Jul 2 07:49:52.634982 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(12): op(13): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1878066073": device or resource busy Jul 2 07:49:52.634982 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(12): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1878066073", trying btrfs: device or resource busy Jul 2 07:49:52.634982 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1878066073" Jul 2 07:49:52.634982 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(14): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1878066073" Jul 2 07:49:52.634982 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(15): [started] unmounting "/mnt/oem1878066073" Jul 2 07:49:52.997159 kernel: kauditd_printk_skb: 26 callbacks suppressed Jul 2 07:49:52.997211 kernel: audit: type=1130 audit(1719906592.770:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:52.997228 kernel: audit: type=1130 audit(1719906592.867:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:52.997244 kernel: audit: type=1130 audit(1719906592.912:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:52.997259 kernel: audit: type=1131 audit(1719906592.912:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:52.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:52.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:52.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:52.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:52.762439 systemd[1]: Finished ignition-files.service. Jul 2 07:49:53.012073 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(15): [finished] unmounting "/mnt/oem1878066073" Jul 2 07:49:53.012073 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Jul 2 07:49:53.012073 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(16): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:49:53.012073 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(16): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jul 2 07:49:53.012073 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(16): GET result: OK Jul 2 07:49:53.012073 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(16): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:49:53.012073 ignition[837]: INFO : files: op(17): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 2 07:49:53.012073 ignition[837]: INFO : files: op(17): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 2 07:49:53.012073 ignition[837]: INFO : files: op(18): [started] processing unit "oem-gce.service" Jul 2 07:49:53.012073 ignition[837]: INFO : files: op(18): [finished] processing unit "oem-gce.service" Jul 2 07:49:53.012073 ignition[837]: INFO : files: op(19): [started] processing unit "oem-gce-enable-oslogin.service" Jul 2 07:49:53.012073 ignition[837]: INFO : files: op(19): [finished] processing unit "oem-gce-enable-oslogin.service" Jul 2 07:49:53.012073 ignition[837]: INFO : files: op(1a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 07:49:53.012073 ignition[837]: INFO : files: op(1a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 07:49:53.012073 ignition[837]: INFO : files: op(1b): [started] setting preset to enabled for "oem-gce.service" Jul 2 07:49:53.012073 ignition[837]: INFO : files: op(1b): [finished] setting preset to enabled for "oem-gce.service" Jul 2 07:49:53.012073 ignition[837]: INFO : files: op(1c): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Jul 2 07:49:53.012073 ignition[837]: INFO : files: op(1c): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Jul 2 07:49:53.012073 ignition[837]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:49:53.442028 kernel: audit: type=1130 audit(1719906593.025:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:53.442079 kernel: audit: type=1131 audit(1719906593.025:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.442102 kernel: audit: type=1130 audit(1719906593.185:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.442125 kernel: audit: type=1131 audit(1719906593.310:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:52.780998 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 07:49:53.460170 ignition[837]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:49:53.460170 ignition[837]: INFO : files: files passed Jul 2 07:49:53.460170 ignition[837]: INFO : Ignition finished successfully Jul 2 07:49:52.819169 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 07:49:53.510170 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:49:52.820304 systemd[1]: Starting ignition-quench.service... Jul 2 07:49:52.844494 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 07:49:53.596029 kernel: audit: type=1131 audit(1719906593.566:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:52.869553 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 07:49:52.869684 systemd[1]: Finished ignition-quench.service. Jul 2 07:49:53.647022 kernel: audit: type=1131 audit(1719906593.618:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:52.914382 systemd[1]: Reached target ignition-complete.target. Jul 2 07:49:53.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:52.977197 systemd[1]: Starting initrd-parse-etc.service... Jul 2 07:49:53.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.017360 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:49:53.017491 systemd[1]: Finished initrd-parse-etc.service. Jul 2 07:49:53.705023 ignition[875]: INFO : Ignition 2.14.0 Jul 2 07:49:53.705023 ignition[875]: INFO : Stage: umount Jul 2 07:49:53.705023 ignition[875]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:49:53.705023 ignition[875]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:49:53.705023 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:49:53.705023 ignition[875]: INFO : umount: umount passed Jul 2 07:49:53.705023 ignition[875]: INFO : Ignition finished successfully Jul 2 07:49:53.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.823301 iscsid[703]: iscsid shutting down. Jul 2 07:49:53.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.027354 systemd[1]: Reached target initrd-fs.target. Jul 2 07:49:53.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.091215 systemd[1]: Reached target initrd.target. Jul 2 07:49:53.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.140216 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Jul 2 07:49:53.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.141396 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 07:49:53.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.169463 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 07:49:53.188498 systemd[1]: Starting initrd-cleanup.service... Jul 2 07:49:53.233928 systemd[1]: Stopped target nss-lookup.target. Jul 2 07:49:53.240234 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 07:49:53.257293 systemd[1]: Stopped target timers.target. Jul 2 07:49:53.275313 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:49:53.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.275494 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 07:49:53.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.312559 systemd[1]: Stopped target initrd.target. Jul 2 07:49:53.346359 systemd[1]: Stopped target basic.target. Jul 2 07:49:54.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.364337 systemd[1]: Stopped target ignition-complete.target. Jul 2 07:49:54.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.382301 systemd[1]: Stopped target ignition-diskful.target. Jul 2 07:49:53.401317 systemd[1]: Stopped target initrd-root-device.target. Jul 2 07:49:53.420306 systemd[1]: Stopped target remote-fs.target. Jul 2 07:49:53.450261 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 07:49:53.468265 systemd[1]: Stopped target sysinit.target. Jul 2 07:49:54.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.483308 systemd[1]: Stopped target local-fs.target. Jul 2 07:49:54.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.115000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:49:53.496298 systemd[1]: Stopped target local-fs-pre.target. Jul 2 07:49:53.532303 systemd[1]: Stopped target swap.target. Jul 2 07:49:53.553267 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:49:53.553452 systemd[1]: Stopped dracut-pre-mount.service. 
Jul 2 07:49:54.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.568453 systemd[1]: Stopped target cryptsetup.target. Jul 2 07:49:54.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.604206 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:49:54.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.604390 systemd[1]: Stopped dracut-initqueue.service. Jul 2 07:49:53.620378 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 07:49:54.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.620613 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 07:49:53.657274 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:49:53.657434 systemd[1]: Stopped ignition-files.service. Jul 2 07:49:54.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.674521 systemd[1]: Stopping ignition-mount.service... Jul 2 07:49:54.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.698302 systemd[1]: Stopping iscsid.service... Jul 2 07:49:54.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.711964 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 07:49:53.712269 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 07:49:54.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.720603 systemd[1]: Stopping sysroot-boot.service... Jul 2 07:49:54.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.732146 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 07:49:54.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:53.732380 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 07:49:53.757256 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jul 2 07:49:53.757421 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 07:49:53.787358 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:49:53.788416 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 07:49:54.446002 systemd-journald[189]: Received SIGTERM from PID 1 (systemd). Jul 2 07:49:53.788525 systemd[1]: Stopped iscsid.service. Jul 2 07:49:53.802750 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 07:49:53.802899 systemd[1]: Stopped ignition-mount.service. Jul 2 07:49:53.809681 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:49:53.809784 systemd[1]: Stopped sysroot-boot.service. Jul 2 07:49:53.831740 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:49:53.831913 systemd[1]: Stopped ignition-disks.service. Jul 2 07:49:53.846037 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:49:53.846107 systemd[1]: Stopped ignition-kargs.service. Jul 2 07:49:53.861057 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 07:49:53.861136 systemd[1]: Stopped ignition-fetch.service. Jul 2 07:49:53.876051 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 07:49:53.876127 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 07:49:53.895139 systemd[1]: Stopped target paths.target. Jul 2 07:49:53.909987 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:49:53.911957 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 07:49:53.924980 systemd[1]: Stopped target slices.target. Jul 2 07:49:53.937993 systemd[1]: Stopped target sockets.target. Jul 2 07:49:53.951034 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:49:53.951099 systemd[1]: Closed iscsid.socket. Jul 2 07:49:53.966007 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:49:53.966086 systemd[1]: Stopped ignition-setup.service. Jul 2 07:49:53.981071 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:49:53.981154 systemd[1]: Stopped initrd-setup-root.service. Jul 2 07:49:53.997163 systemd[1]: Stopping iscsiuio.service... Jul 2 07:49:54.011456 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 07:49:54.011562 systemd[1]: Stopped iscsiuio.service. Jul 2 07:49:54.018450 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:49:54.018548 systemd[1]: Finished initrd-cleanup.service. Jul 2 07:49:54.039027 systemd[1]: Stopped target network.target. Jul 2 07:49:54.054076 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:49:54.054129 systemd[1]: Closed iscsiuio.socket. Jul 2 07:49:54.062305 systemd[1]: Stopping systemd-networkd.service... Jul 2 07:49:54.066929 systemd-networkd[693]: eth0: DHCPv6 lease lost Jul 2 07:49:54.075256 systemd[1]: Stopping systemd-resolved.service... Jul 2 07:49:54.095321 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:49:54.095439 systemd[1]: Stopped systemd-resolved.service. Jul 2 07:49:54.103778 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:49:54.103920 systemd[1]: Stopped systemd-networkd.service. Jul 2 07:49:54.116997 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:49:54.117035 systemd[1]: Closed systemd-networkd.socket. Jul 2 07:49:54.137970 systemd[1]: Stopping network-cleanup.service... Jul 2 07:49:54.151133 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:49:54.151198 systemd[1]: Stopped parse-ip-for-networkd.service. 
Jul 2 07:49:54.173100 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:49:54.173168 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:49:54.188219 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:49:54.188273 systemd[1]: Stopped systemd-modules-load.service. Jul 2 07:49:54.203193 systemd[1]: Stopping systemd-udevd.service... Jul 2 07:49:54.219579 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:49:54.220212 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 07:49:54.220360 systemd[1]: Stopped systemd-udevd.service. Jul 2 07:49:54.226379 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:49:54.226568 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 07:49:54.250133 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:49:54.250177 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 07:49:54.266086 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:49:54.266153 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 07:49:54.274208 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 07:49:54.274259 systemd[1]: Stopped dracut-cmdline.service. Jul 2 07:49:54.295142 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:49:54.295199 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 07:49:54.303234 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 07:49:54.323006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:49:54.323101 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 07:49:54.341625 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:49:54.341744 systemd[1]: Stopped network-cleanup.service. Jul 2 07:49:54.356377 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:49:54.356477 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 07:49:54.371339 systemd[1]: Reached target initrd-switch-root.target. Jul 2 07:49:54.390045 systemd[1]: Starting initrd-switch-root.service... Jul 2 07:49:54.412965 systemd[1]: Switching root. Jul 2 07:49:54.456270 systemd-journald[189]: Journal stopped Jul 2 07:49:58.977590 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 07:49:58.977709 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 07:49:58.977736 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:49:58.977759 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:49:58.977791 kernel: SELinux: policy capability open_perms=1 Jul 2 07:49:58.977815 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:49:58.977839 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:49:58.977863 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:49:58.977918 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:49:58.977952 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:49:58.977981 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:49:58.978014 systemd[1]: Successfully loaded SELinux policy in 107.490ms. Jul 2 07:49:58.978057 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.303ms. 
Jul 2 07:49:58.978083 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:49:58.978110 systemd[1]: Detected virtualization kvm. Jul 2 07:49:58.978135 systemd[1]: Detected architecture x86-64. Jul 2 07:49:58.978159 systemd[1]: Detected first boot. Jul 2 07:49:58.978190 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:49:58.978215 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 07:49:58.978240 systemd[1]: Populated /etc with preset unit settings. Jul 2 07:49:58.978267 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:49:58.978295 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:49:58.978322 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:49:58.978353 kernel: kauditd_printk_skb: 48 callbacks suppressed Jul 2 07:49:58.978381 kernel: audit: type=1334 audit(1719906598.105:88): prog-id=12 op=LOAD Jul 2 07:49:58.978404 kernel: audit: type=1334 audit(1719906598.105:89): prog-id=3 op=UNLOAD Jul 2 07:49:58.978441 kernel: audit: type=1334 audit(1719906598.110:90): prog-id=13 op=LOAD Jul 2 07:49:58.978463 kernel: audit: type=1334 audit(1719906598.124:91): prog-id=14 op=LOAD Jul 2 07:49:58.978486 kernel: audit: type=1334 audit(1719906598.124:92): prog-id=4 op=UNLOAD Jul 2 07:49:58.978508 kernel: audit: type=1334 audit(1719906598.124:93): prog-id=5 op=UNLOAD Jul 2 07:49:58.978530 kernel: audit: type=1334 audit(1719906598.138:94): prog-id=15 op=LOAD Jul 2 07:49:58.978552 kernel: audit: type=1334 audit(1719906598.138:95): prog-id=12 op=UNLOAD Jul 2 07:49:58.978574 kernel: audit: type=1334 audit(1719906598.145:96): prog-id=16 op=LOAD Jul 2 07:49:58.978602 kernel: audit: type=1334 audit(1719906598.152:97): prog-id=17 op=LOAD Jul 2 07:49:58.978625 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 07:49:58.978650 systemd[1]: Stopped initrd-switch-root.service. Jul 2 07:49:58.978674 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 07:49:58.978700 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 07:49:58.978724 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 07:49:58.978754 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 2 07:49:58.978778 systemd[1]: Created slice system-getty.slice. Jul 2 07:49:58.978815 systemd[1]: Created slice system-modprobe.slice. Jul 2 07:49:58.978831 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 07:49:58.978846 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 07:49:58.978861 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 07:49:58.978901 systemd[1]: Created slice user.slice. Jul 2 07:49:58.978928 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:49:58.978943 systemd[1]: Started systemd-ask-password-wall.path. 
Jul 2 07:49:58.978958 systemd[1]: Set up automount boot.automount. Jul 2 07:49:58.978976 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 07:49:58.978991 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 07:49:58.979006 systemd[1]: Stopped target initrd-fs.target. Jul 2 07:49:58.979021 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 07:49:58.979036 systemd[1]: Reached target integritysetup.target. Jul 2 07:49:58.979052 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:49:58.979067 systemd[1]: Reached target remote-fs.target. Jul 2 07:49:58.979082 systemd[1]: Reached target slices.target. Jul 2 07:49:58.979097 systemd[1]: Reached target swap.target. Jul 2 07:49:58.979115 systemd[1]: Reached target torcx.target. Jul 2 07:49:58.979130 systemd[1]: Reached target veritysetup.target. Jul 2 07:49:58.979145 systemd[1]: Listening on systemd-coredump.socket. Jul 2 07:49:58.979159 systemd[1]: Listening on systemd-initctl.socket. Jul 2 07:49:58.979175 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:49:58.979189 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:49:58.979204 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:49:58.979219 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 07:49:58.979234 systemd[1]: Mounting dev-hugepages.mount... Jul 2 07:49:58.979248 systemd[1]: Mounting dev-mqueue.mount... Jul 2 07:49:58.979266 systemd[1]: Mounting media.mount... Jul 2 07:49:58.979281 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:49:58.979296 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 07:49:58.979311 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 07:49:58.979326 systemd[1]: Mounting tmp.mount... Jul 2 07:49:58.979340 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 07:49:58.979355 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:49:58.979372 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:49:58.979386 systemd[1]: Starting modprobe@configfs.service... Jul 2 07:49:58.979404 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:49:58.979419 systemd[1]: Starting modprobe@drm.service... Jul 2 07:49:58.979433 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:49:58.979448 systemd[1]: Starting modprobe@fuse.service... Jul 2 07:49:58.979462 systemd[1]: Starting modprobe@loop.service... Jul 2 07:49:58.979477 kernel: fuse: init (API version 7.34) Jul 2 07:49:58.979492 kernel: loop: module loaded Jul 2 07:49:58.979507 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 07:49:58.979523 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 07:49:58.979543 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 07:49:58.979558 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 07:49:58.979572 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 07:49:58.979587 systemd[1]: Stopped systemd-journald.service. Jul 2 07:49:58.979602 systemd[1]: Starting systemd-journald.service... Jul 2 07:49:58.979617 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:49:58.979632 systemd[1]: Starting systemd-network-generator.service... Jul 2 07:49:58.979652 systemd-journald[998]: Journal started Jul 2 07:49:58.979718 systemd-journald[998]: Runtime Journal (/run/log/journal/e7d5c4a3470cd7ef58c7b9ab4f098c3e) is 8.0M, max 148.8M, 140.8M free. 
Jul 2 07:49:54.456000 audit: BPF prog-id=9 op=UNLOAD Jul 2 07:49:54.728000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:49:54.877000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:49:54.878000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:49:54.878000 audit: BPF prog-id=10 op=LOAD Jul 2 07:49:54.878000 audit: BPF prog-id=10 op=UNLOAD Jul 2 07:49:54.878000 audit: BPF prog-id=11 op=LOAD Jul 2 07:49:54.878000 audit: BPF prog-id=11 op=UNLOAD Jul 2 07:49:55.042000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 07:49:55.042000 audit[908]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:49:55.042000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:49:55.053000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 07:49:55.053000 audit[908]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059b9 a2=1ed a3=0 items=2 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:49:55.053000 audit: CWD cwd="/" Jul 2 07:49:55.053000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:55.053000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:49:55.053000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:49:58.105000 audit: BPF prog-id=12 op=LOAD Jul 2 07:49:58.105000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:49:58.110000 audit: BPF prog-id=13 op=LOAD Jul 2 07:49:58.124000 audit: BPF prog-id=14 op=LOAD Jul 2 07:49:58.124000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:49:58.124000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:49:58.138000 audit: BPF prog-id=15 op=LOAD Jul 2 07:49:58.138000 audit: BPF prog-id=12 
op=UNLOAD Jul 2 07:49:58.145000 audit: BPF prog-id=16 op=LOAD Jul 2 07:49:58.152000 audit: BPF prog-id=17 op=LOAD Jul 2 07:49:58.152000 audit: BPF prog-id=13 op=UNLOAD Jul 2 07:49:58.152000 audit: BPF prog-id=14 op=UNLOAD Jul 2 07:49:58.159000 audit: BPF prog-id=18 op=LOAD Jul 2 07:49:58.159000 audit: BPF prog-id=15 op=UNLOAD Jul 2 07:49:58.180000 audit: BPF prog-id=19 op=LOAD Jul 2 07:49:58.180000 audit: BPF prog-id=20 op=LOAD Jul 2 07:49:58.180000 audit: BPF prog-id=16 op=UNLOAD Jul 2 07:49:58.180000 audit: BPF prog-id=17 op=UNLOAD Jul 2 07:49:58.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:58.197000 audit: BPF prog-id=18 op=UNLOAD Jul 2 07:49:58.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:58.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:58.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:58.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:58.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:58.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:58.929000 audit: BPF prog-id=21 op=LOAD Jul 2 07:49:58.929000 audit: BPF prog-id=22 op=LOAD Jul 2 07:49:58.929000 audit: BPF prog-id=23 op=LOAD Jul 2 07:49:58.929000 audit: BPF prog-id=19 op=UNLOAD Jul 2 07:49:58.929000 audit: BPF prog-id=20 op=UNLOAD Jul 2 07:49:58.972000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:49:58.972000 audit[998]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd7389a930 a2=4000 a3=7ffd7389a9cc items=0 ppid=1 pid=998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:49:58.972000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:49:58.105440 systemd[1]: Queued start job for default target multi-user.target. 
Jul 2 07:49:55.039227 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:49:58.183257 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 07:49:55.040266 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:49:55.040292 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:49:55.040331 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 07:49:55.040343 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 07:49:55.040383 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 07:49:55.040398 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 07:49:55.040625 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 07:49:55.040680 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:49:55.040696 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:49:55.043085 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 07:49:55.043137 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 07:49:55.043161 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 07:49:55.043179 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 07:49:55.043205 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 07:49:55.043222 
/usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 07:49:57.511181 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:57Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:49:57.511474 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:57Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:49:57.511614 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:57Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:49:57.511841 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:57Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:49:57.511942 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:57Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 07:49:57.512020 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:49:57Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 07:49:58.987910 systemd[1]: Starting systemd-remount-fs.service... Jul 2 07:49:59.002911 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:49:59.015892 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 07:49:59.021907 systemd[1]: Stopped verity-setup.service. Jul 2 07:49:59.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.040887 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:49:59.049903 systemd[1]: Started systemd-journald.service. Jul 2 07:49:59.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.059251 systemd[1]: Mounted dev-hugepages.mount. Jul 2 07:49:59.066141 systemd[1]: Mounted dev-mqueue.mount. Jul 2 07:49:59.073145 systemd[1]: Mounted media.mount. Jul 2 07:49:59.080119 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 07:49:59.088116 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 07:49:59.097061 systemd[1]: Mounted tmp.mount. Jul 2 07:49:59.104205 systemd[1]: Finished flatcar-tmpfiles.service. 
Jul 2 07:49:59.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.113280 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:49:59.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.122276 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:49:59.122475 systemd[1]: Finished modprobe@configfs.service. Jul 2 07:49:59.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.131370 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:49:59.131586 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:49:59.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.140339 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:49:59.140542 systemd[1]: Finished modprobe@drm.service. Jul 2 07:49:59.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.149332 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:49:59.149558 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:49:59.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.158308 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 07:49:59.158512 systemd[1]: Finished modprobe@fuse.service. Jul 2 07:49:59.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:59.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.167315 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:49:59.167518 systemd[1]: Finished modprobe@loop.service. Jul 2 07:49:59.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.176322 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:49:59.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.185278 systemd[1]: Finished systemd-network-generator.service. Jul 2 07:49:59.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.194332 systemd[1]: Finished systemd-remount-fs.service. Jul 2 07:49:59.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.203297 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:49:59.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.212661 systemd[1]: Reached target network-pre.target. Jul 2 07:49:59.222402 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 07:49:59.232276 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 07:49:59.238991 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:49:59.243785 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 07:49:59.252394 systemd[1]: Starting systemd-journal-flush.service... Jul 2 07:49:59.261019 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:49:59.262512 systemd[1]: Starting systemd-random-seed.service... Jul 2 07:49:59.270027 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:49:59.271586 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:49:59.274977 systemd-journald[998]: Time spent on flushing to /var/log/journal/e7d5c4a3470cd7ef58c7b9ab4f098c3e is 64.244ms for 1141 entries. Jul 2 07:49:59.274977 systemd-journald[998]: System Journal (/var/log/journal/e7d5c4a3470cd7ef58c7b9ab4f098c3e) is 8.0M, max 584.8M, 576.8M free. Jul 2 07:49:59.370108 systemd-journald[998]: Received client request to flush runtime journal. 
Jul 2 07:49:59.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.287302 systemd[1]: Starting systemd-sysusers.service... Jul 2 07:49:59.295402 systemd[1]: Starting systemd-udev-settle.service... Jul 2 07:49:59.371589 udevadm[1012]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 07:49:59.306287 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 07:49:59.315094 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 07:49:59.324337 systemd[1]: Finished systemd-random-seed.service. Jul 2 07:49:59.333394 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:49:59.345652 systemd[1]: Reached target first-boot-complete.target. Jul 2 07:49:59.358709 systemd[1]: Finished systemd-sysusers.service. Jul 2 07:49:59.371199 systemd[1]: Finished systemd-journal-flush.service. Jul 2 07:49:59.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.919225 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 07:49:59.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.926000 audit: BPF prog-id=24 op=LOAD Jul 2 07:49:59.926000 audit: BPF prog-id=25 op=LOAD Jul 2 07:49:59.926000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:49:59.926000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:49:59.929763 systemd[1]: Starting systemd-udevd.service... Jul 2 07:49:59.951205 systemd-udevd[1015]: Using default interface naming scheme 'v252'. Jul 2 07:49:59.998011 systemd[1]: Started systemd-udevd.service. Jul 2 07:50:00.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.006000 audit: BPF prog-id=26 op=LOAD Jul 2 07:50:00.009166 systemd[1]: Starting systemd-networkd.service... Jul 2 07:50:00.023000 audit: BPF prog-id=27 op=LOAD Jul 2 07:50:00.023000 audit: BPF prog-id=28 op=LOAD Jul 2 07:50:00.024000 audit: BPF prog-id=29 op=LOAD Jul 2 07:50:00.026713 systemd[1]: Starting systemd-userdbd.service... Jul 2 07:50:00.083600 systemd[1]: Started systemd-userdbd.service. Jul 2 07:50:00.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.092212 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. 
Jul 2 07:50:00.203901 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 07:50:00.208482 systemd-networkd[1028]: lo: Link UP Jul 2 07:50:00.208495 systemd-networkd[1028]: lo: Gained carrier Jul 2 07:50:00.209258 systemd-networkd[1028]: Enumeration completed Jul 2 07:50:00.209399 systemd[1]: Started systemd-networkd.service. Jul 2 07:50:00.211007 systemd-networkd[1028]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:50:00.215910 kernel: ACPI: button: Power Button [PWRF] Jul 2 07:50:00.216648 systemd-networkd[1028]: eth0: Link UP Jul 2 07:50:00.216665 systemd-networkd[1028]: eth0: Gained carrier Jul 2 07:50:00.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.227021 systemd-networkd[1028]: eth0: DHCPv4 address 10.128.0.56/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 2 07:50:00.250915 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1042) Jul 2 07:50:00.260890 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jul 2 07:50:00.277904 kernel: ACPI: button: Sleep Button [SLPF] Jul 2 07:50:00.295000 audit[1039]: AVC avc: denied { confidentiality } for pid=1039 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:50:00.326903 kernel: EDAC MC: Ver: 3.0.0 Jul 2 07:50:00.295000 audit[1039]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557777ff2f60 a1=3207c a2=7fe64e972bc5 a3=5 items=108 ppid=1015 pid=1039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:50:00.295000 audit: CWD cwd="/" Jul 2 07:50:00.295000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=1 name=(null) inode=14562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=2 name=(null) inode=14562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=3 name=(null) inode=14563 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=4 name=(null) inode=14562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=5 name=(null) inode=14564 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=6 name=(null) inode=14562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 
audit: PATH item=7 name=(null) inode=14565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=8 name=(null) inode=14565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=9 name=(null) inode=14566 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=10 name=(null) inode=14565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=11 name=(null) inode=14567 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=12 name=(null) inode=14565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=13 name=(null) inode=14568 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.333970 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:50:00.295000 audit: PATH item=14 name=(null) inode=14565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=15 name=(null) inode=14569 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=16 name=(null) inode=14565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=17 name=(null) inode=14570 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=18 name=(null) inode=14562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=19 name=(null) inode=14571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=20 name=(null) inode=14571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=21 name=(null) inode=14572 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=22 name=(null) inode=14571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH 
item=23 name=(null) inode=14573 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=24 name=(null) inode=14571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=25 name=(null) inode=14574 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=26 name=(null) inode=14571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=27 name=(null) inode=14575 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=28 name=(null) inode=14571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=29 name=(null) inode=14576 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=30 name=(null) inode=14562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=31 name=(null) inode=14577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=32 name=(null) inode=14577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=33 name=(null) inode=14578 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=34 name=(null) inode=14577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=35 name=(null) inode=14579 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=36 name=(null) inode=14577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=37 name=(null) inode=14580 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=38 name=(null) inode=14577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=39 name=(null) inode=14581 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=40 name=(null) inode=14577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=41 name=(null) inode=14582 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=42 name=(null) inode=14562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=43 name=(null) inode=14583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=44 name=(null) inode=14583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=45 name=(null) inode=14584 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=46 name=(null) inode=14583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=47 name=(null) inode=14585 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=48 name=(null) inode=14583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=49 name=(null) inode=14586 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=50 name=(null) inode=14583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=51 name=(null) inode=14587 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=52 name=(null) inode=14583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=53 name=(null) inode=14588 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=55 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=56 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=57 name=(null) inode=14590 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=58 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=59 name=(null) inode=14591 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=60 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=61 name=(null) inode=14592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=62 name=(null) inode=14592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=63 name=(null) inode=14593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=64 name=(null) inode=14592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=65 name=(null) inode=14594 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=66 name=(null) inode=14592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=67 name=(null) inode=14595 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=68 name=(null) inode=14592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=69 name=(null) inode=14596 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=70 name=(null) inode=14592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=71 name=(null) inode=14597 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=72 name=(null) inode=14589 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=73 name=(null) inode=14598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=74 name=(null) inode=14598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=75 name=(null) inode=14599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=76 name=(null) inode=14598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=77 name=(null) inode=14600 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=78 name=(null) inode=14598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=79 name=(null) inode=14601 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=80 name=(null) inode=14598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=81 name=(null) inode=14602 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=82 name=(null) inode=14598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=83 name=(null) inode=14603 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=84 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=85 name=(null) inode=14604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=86 name=(null) inode=14604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=87 name=(null) inode=14605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=88 name=(null) inode=14604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=89 name=(null) inode=14606 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=90 name=(null) inode=14604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=91 name=(null) inode=14607 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=92 name=(null) inode=14604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=93 name=(null) inode=14608 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=94 name=(null) inode=14604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=95 name=(null) inode=14609 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=96 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=97 name=(null) inode=14610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=98 name=(null) inode=14610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=99 name=(null) inode=14611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=100 name=(null) inode=14610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=101 name=(null) inode=14612 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=102 name=(null) inode=14610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=103 name=(null) inode=14613 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=104 name=(null) inode=14610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH 
item=105 name=(null) inode=14614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=106 name=(null) inode=14610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PATH item=107 name=(null) inode=14615 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:00.295000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 07:50:00.353888 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 2 07:50:00.377890 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 2 07:50:00.391891 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 07:50:00.410393 systemd[1]: Finished systemd-udev-settle.service. Jul 2 07:50:00.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.420529 systemd[1]: Starting lvm2-activation-early.service... Jul 2 07:50:00.448621 lvm[1052]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:50:00.476065 systemd[1]: Finished lvm2-activation-early.service. Jul 2 07:50:00.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.484180 systemd[1]: Reached target cryptsetup.target. Jul 2 07:50:00.494415 systemd[1]: Starting lvm2-activation.service... Jul 2 07:50:00.500149 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:50:00.527158 systemd[1]: Finished lvm2-activation.service. Jul 2 07:50:00.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.537201 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:50:00.545988 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 07:50:00.546036 systemd[1]: Reached target local-fs.target. Jul 2 07:50:00.553982 systemd[1]: Reached target machines.target. Jul 2 07:50:00.563492 systemd[1]: Starting ldconfig.service... Jul 2 07:50:00.570986 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:50:00.571068 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:50:00.572714 systemd[1]: Starting systemd-boot-update.service... Jul 2 07:50:00.581486 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 07:50:00.592728 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 07:50:00.595069 systemd[1]: Starting systemd-sysext.service... 
Jul 2 07:50:00.596671 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1055 (bootctl) Jul 2 07:50:00.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.599999 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 07:50:00.610288 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 07:50:00.627848 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 07:50:00.636385 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 07:50:00.636690 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 07:50:00.658894 kernel: loop0: detected capacity change from 0 to 211296 Jul 2 07:50:00.737718 systemd-fsck[1064]: fsck.fat 4.2 (2021-01-31) Jul 2 07:50:00.737718 systemd-fsck[1064]: /dev/sda1: 789 files, 119238/258078 clusters Jul 2 07:50:00.741591 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 07:50:00.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.756018 systemd[1]: Mounting boot.mount... Jul 2 07:50:00.813903 systemd[1]: Mounted boot.mount. Jul 2 07:50:00.841048 systemd[1]: Finished systemd-boot-update.service. Jul 2 07:50:00.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.997010 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 07:50:00.998127 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 07:50:01.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:01.031907 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 07:50:01.061911 kernel: loop1: detected capacity change from 0 to 211296 Jul 2 07:50:01.085882 (sd-sysext)[1071]: Using extensions 'kubernetes'. Jul 2 07:50:01.086993 (sd-sysext)[1071]: Merged extensions into '/usr'. Jul 2 07:50:01.113142 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:50:01.115303 systemd[1]: Mounting usr-share-oem.mount... Jul 2 07:50:01.123204 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:50:01.125009 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:50:01.133580 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:50:01.142658 systemd[1]: Starting modprobe@loop.service... Jul 2 07:50:01.150077 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:50:01.150300 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:50:01.150489 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 2 07:50:01.155809 systemd[1]: Mounted usr-share-oem.mount. Jul 2 07:50:01.156972 ldconfig[1054]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 07:50:01.163509 systemd[1]: Finished ldconfig.service. Jul 2 07:50:01.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:01.171580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:50:01.171782 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:50:01.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:01.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:01.180608 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:50:01.180812 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:50:01.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:01.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:01.189599 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:50:01.189811 systemd[1]: Finished modprobe@loop.service. Jul 2 07:50:01.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:01.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:01.199737 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:50:01.199898 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:50:01.201327 systemd[1]: Finished systemd-sysext.service. Jul 2 07:50:01.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:01.211603 systemd[1]: Starting ensure-sysext.service... Jul 2 07:50:01.220348 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 07:50:01.232493 systemd[1]: Reloading. Jul 2 07:50:01.250011 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 07:50:01.257881 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 07:50:01.273044 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 2 07:50:01.310858 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-07-02T07:50:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:50:01.325973 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-07-02T07:50:01Z" level=info msg="torcx already run" Jul 2 07:50:01.487398 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:50:01.487697 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:50:01.524312 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:50:01.599000 audit: BPF prog-id=30 op=LOAD Jul 2 07:50:01.600000 audit: BPF prog-id=26 op=UNLOAD Jul 2 07:50:01.602000 audit: BPF prog-id=31 op=LOAD Jul 2 07:50:01.602000 audit: BPF prog-id=21 op=UNLOAD Jul 2 07:50:01.602000 audit: BPF prog-id=32 op=LOAD Jul 2 07:50:01.602000 audit: BPF prog-id=33 op=LOAD Jul 2 07:50:01.602000 audit: BPF prog-id=22 op=UNLOAD Jul 2 07:50:01.603000 audit: BPF prog-id=23 op=UNLOAD Jul 2 07:50:01.603000 audit: BPF prog-id=34 op=LOAD Jul 2 07:50:01.603000 audit: BPF prog-id=27 op=UNLOAD Jul 2 07:50:01.603000 audit: BPF prog-id=35 op=LOAD Jul 2 07:50:01.603000 audit: BPF prog-id=36 op=LOAD Jul 2 07:50:01.603000 audit: BPF prog-id=28 op=UNLOAD Jul 2 07:50:01.603000 audit: BPF prog-id=29 op=UNLOAD Jul 2 07:50:01.604000 audit: BPF prog-id=37 op=LOAD Jul 2 07:50:01.604000 audit: BPF prog-id=38 op=LOAD Jul 2 07:50:01.604000 audit: BPF prog-id=24 op=UNLOAD Jul 2 07:50:01.604000 audit: BPF prog-id=25 op=UNLOAD Jul 2 07:50:01.610988 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 07:50:01.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:01.625075 systemd[1]: Starting audit-rules.service... Jul 2 07:50:01.633400 systemd[1]: Starting clean-ca-certificates.service... Jul 2 07:50:01.643738 systemd[1]: Starting oem-gce-enable-oslogin.service... Jul 2 07:50:01.653556 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 07:50:01.661000 audit: BPF prog-id=39 op=LOAD Jul 2 07:50:01.664438 systemd[1]: Starting systemd-resolved.service... Jul 2 07:50:01.673503 systemd[1]: Starting systemd-timesyncd.service... Jul 2 07:50:01.670000 audit: BPF prog-id=40 op=LOAD Jul 2 07:50:01.682800 systemd[1]: Starting systemd-update-utmp.service... Jul 2 07:50:01.691000 audit[1168]: SYSTEM_BOOT pid=1168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 07:50:01.693637 systemd[1]: Finished clean-ca-certificates.service. Jul 2 07:50:01.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:50:01.702372 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Jul 2 07:50:01.702586 systemd[1]: Finished oem-gce-enable-oslogin.service. Jul 2 07:50:01.705681 augenrules[1172]: No rules Jul 2 07:50:01.703000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 07:50:01.703000 audit[1172]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd901a2920 a2=420 a3=0 items=0 ppid=1142 pid=1172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:50:01.703000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 07:50:01.711444 systemd[1]: Finished audit-rules.service. Jul 2 07:50:01.718300 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 07:50:01.735173 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:50:01.735741 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:50:01.738669 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:50:01.749094 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:50:01.759082 systemd[1]: Starting modprobe@loop.service... Jul 2 07:50:01.768828 systemd[1]: Starting oem-gce-enable-oslogin.service... Jul 2 07:50:01.773508 enable-oslogin[1180]: /etc/pam.d/sshd already exists. Not enabling OS Login Jul 2 07:50:01.775556 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:50:01.775938 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:50:01.778446 systemd[1]: Starting systemd-update-done.service... Jul 2 07:50:01.784984 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:50:01.785294 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:50:01.789042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:50:01.789246 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:50:01.798904 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:50:01.799108 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:50:01.807848 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:50:01.808066 systemd[1]: Finished modprobe@loop.service. Jul 2 07:50:01.816906 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Jul 2 07:50:01.817150 systemd[1]: Finished oem-gce-enable-oslogin.service. Jul 2 07:50:01.827831 systemd[1]: Finished systemd-update-done.service. Jul 2 07:50:01.837278 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:50:01.837590 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:50:01.842271 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 2 07:50:01.842809 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:50:01.847447 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:50:01.855833 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:50:01.860079 systemd-resolved[1159]: Positive Trust Anchors: Jul 2 07:50:01.860100 systemd-resolved[1159]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:50:01.860710 systemd-resolved[1159]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:50:01.864828 systemd[1]: Starting modprobe@loop.service... Jul 2 07:50:01.873838 systemd[1]: Starting oem-gce-enable-oslogin.service... Jul 2 07:50:01.879067 enable-oslogin[1185]: /etc/pam.d/sshd already exists. Not enabling OS Login Jul 2 07:50:01.880456 systemd-resolved[1159]: Defaulting to hostname 'linux'. Jul 2 07:50:01.883044 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:50:01.883268 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:50:01.883453 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:50:01.883607 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:50:01.886057 systemd[1]: Finished systemd-update-utmp.service. Jul 2 07:50:01.894399 systemd[1]: Started systemd-resolved.service. Jul 2 07:50:01.903571 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:50:01.903764 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:50:01.912707 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:50:01.912920 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:50:01.917806 systemd-timesyncd[1164]: Contacted time server 169.254.169.254:123 (169.254.169.254). Jul 2 07:50:01.917905 systemd-timesyncd[1164]: Initial clock synchronization to Tue 2024-07-02 07:50:01.905673 UTC. Jul 2 07:50:01.921307 systemd[1]: Started systemd-timesyncd.service. Jul 2 07:50:01.930706 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:50:01.930846 systemd[1]: Finished modprobe@loop.service. Jul 2 07:50:01.939410 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Jul 2 07:50:01.939578 systemd[1]: Finished oem-gce-enable-oslogin.service. Jul 2 07:50:01.949536 systemd[1]: Reached target network.target. Jul 2 07:50:01.958100 systemd[1]: Reached target nss-lookup.target. Jul 2 07:50:01.966100 systemd[1]: Reached target time-set.target. Jul 2 07:50:01.974064 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:50:01.974303 systemd[1]: Reached target sysinit.target. Jul 2 07:50:01.983268 systemd[1]: Started motdgen.path. 
Jul 2 07:50:01.990234 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 07:50:02.000389 systemd[1]: Started logrotate.timer. Jul 2 07:50:02.007329 systemd[1]: Started mdadm.timer. Jul 2 07:50:02.014175 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 07:50:02.023085 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:50:02.023276 systemd[1]: Reached target paths.target. Jul 2 07:50:02.030132 systemd[1]: Reached target timers.target. Jul 2 07:50:02.037631 systemd[1]: Listening on dbus.socket. Jul 2 07:50:02.046676 systemd[1]: Starting docker.socket... Jul 2 07:50:02.057783 systemd[1]: Listening on sshd.socket. Jul 2 07:50:02.065233 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:50:02.065545 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:50:02.068369 systemd[1]: Listening on docker.socket. Jul 2 07:50:02.071021 systemd-networkd[1028]: eth0: Gained IPv6LL Jul 2 07:50:02.077727 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:50:02.077946 systemd[1]: Reached target sockets.target. Jul 2 07:50:02.086141 systemd[1]: Reached target basic.target. Jul 2 07:50:02.093102 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:50:02.093288 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:50:02.095224 systemd[1]: Starting containerd.service... Jul 2 07:50:02.103803 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 2 07:50:02.114190 systemd[1]: Starting dbus.service... Jul 2 07:50:02.123709 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 07:50:02.133131 systemd[1]: Starting extend-filesystems.service... Jul 2 07:50:02.140007 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 07:50:02.140606 jq[1192]: false Jul 2 07:50:02.142358 systemd[1]: Starting modprobe@drm.service... Jul 2 07:50:02.151646 systemd[1]: Starting motdgen.service... Jul 2 07:50:02.161237 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 07:50:02.169946 systemd[1]: Starting sshd-keygen.service... Jul 2 07:50:02.179014 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 07:50:02.186990 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:50:02.187264 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jul 2 07:50:02.188050 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 07:50:02.189469 systemd[1]: Starting update-engine.service... Jul 2 07:50:02.198975 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 07:50:02.205383 jq[1213]: true Jul 2 07:50:02.213728 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 07:50:02.215580 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Jul 2 07:50:02.216513 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:50:02.216726 systemd[1]: Finished modprobe@drm.service. Jul 2 07:50:02.224539 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 07:50:02.224782 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 07:50:02.233929 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 07:50:02.237612 extend-filesystems[1193]: Found loop1 Jul 2 07:50:02.262095 dbus-daemon[1191]: [system] SELinux support is enabled Jul 2 07:50:02.244731 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 07:50:02.338982 extend-filesystems[1193]: Found sda Jul 2 07:50:02.338982 extend-filesystems[1193]: Found sda1 Jul 2 07:50:02.338982 extend-filesystems[1193]: Found sda2 Jul 2 07:50:02.338982 extend-filesystems[1193]: Found sda3 Jul 2 07:50:02.338982 extend-filesystems[1193]: Found usr Jul 2 07:50:02.338982 extend-filesystems[1193]: Found sda4 Jul 2 07:50:02.338982 extend-filesystems[1193]: Found sda6 Jul 2 07:50:02.338982 extend-filesystems[1193]: Found sda7 Jul 2 07:50:02.338982 extend-filesystems[1193]: Found sda9 Jul 2 07:50:02.338982 extend-filesystems[1193]: Checking size of /dev/sda9 Jul 2 07:50:02.338982 extend-filesystems[1193]: Resized partition /dev/sda9 Jul 2 07:50:02.495501 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jul 2 07:50:02.495571 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jul 2 07:50:02.495607 kernel: loop2: detected capacity change from 0 to 2097152 Jul 2 07:50:02.495641 jq[1218]: true Jul 2 07:50:02.272448 dbus-daemon[1191]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1028 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 07:50:02.245000 systemd[1]: Finished motdgen.service. Jul 2 07:50:02.496122 extend-filesystems[1230]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 07:50:02.496122 extend-filesystems[1230]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 2 07:50:02.496122 extend-filesystems[1230]: old_desc_blocks = 1, new_desc_blocks = 2 Jul 2 07:50:02.496122 extend-filesystems[1230]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jul 2 07:50:02.565027 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 07:50:02.565086 update_engine[1212]: I0702 07:50:02.347214 1212 main.cc:92] Flatcar Update Engine starting Jul 2 07:50:02.565086 update_engine[1212]: I0702 07:50:02.355264 1212 update_check_scheduler.cc:74] Next update check in 7m23s Jul 2 07:50:02.565501 env[1219]: time="2024-07-02T07:50:02.509284474Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 07:50:02.339468 dbus-daemon[1191]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 07:50:02.258490 systemd[1]: Reached target network-online.target. Jul 2 07:50:02.566109 extend-filesystems[1193]: Resized filesystem in /dev/sda9 Jul 2 07:50:02.270399 systemd[1]: Starting kubelet.service... Jul 2 07:50:02.278798 systemd[1]: Starting oem-gce.service... Jul 2 07:50:02.287934 systemd[1]: Starting systemd-logind.service... Jul 2 07:50:02.295499 systemd[1]: Started dbus.service. Jul 2 07:50:02.575169 bash[1254]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:50:02.307481 systemd[1]: Finished ensure-sysext.service. 
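[Editor's note] The kernel and extend-filesystems messages above show the root filesystem on /dev/sda9 being grown on-line from 1617920 to 2538491 4k blocks (roughly 6.2 GiB to 9.7 GiB). The tool doing the work is resize2fs, which can enlarge a mounted ext4 filesystem to fill its partition. A hedged sketch of that step (requires root; the device name is taken from the log):

```python
# Illustrative sketch of the on-line grow that extend-filesystems performs on /dev/sda9.
# Assumes the partition itself has already been enlarged; "resize2fs <device>" with no
# size argument grows the ext4 filesystem to fill the partition, even while mounted.
import subprocess

ROOT_DEVICE = "/dev/sda9"  # ROOT partition on this Flatcar/GCE image

def grow_root_filesystem(device: str = ROOT_DEVICE) -> None:
    # Read-only look at the current superblock (block count, block size).
    subprocess.run(["dumpe2fs", "-h", device], check=True)
    # On-line resize; the kernel then logs
    #   "EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks"
    subprocess.run(["resize2fs", device], check=True)

if __name__ == "__main__":
    grow_root_filesystem()
```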
Jul 2 07:50:02.324742 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 07:50:02.575692 mkfs.ext4[1234]: mke2fs 1.46.5 (30-Dec-2021) Jul 2 07:50:02.575692 mkfs.ext4[1234]: Discarding device blocks: done Jul 2 07:50:02.575692 mkfs.ext4[1234]: Creating filesystem with 262144 4k blocks and 65536 inodes Jul 2 07:50:02.575692 mkfs.ext4[1234]: Filesystem UUID: d83fd911-11fe-45e1-a348-0e0de3f23dbb Jul 2 07:50:02.575692 mkfs.ext4[1234]: Superblock backups stored on blocks: Jul 2 07:50:02.575692 mkfs.ext4[1234]: 32768, 98304, 163840, 229376 Jul 2 07:50:02.575692 mkfs.ext4[1234]: Allocating group tables: done Jul 2 07:50:02.575692 mkfs.ext4[1234]: Writing inode tables: done Jul 2 07:50:02.575692 mkfs.ext4[1234]: Creating journal (8192 blocks): done Jul 2 07:50:02.575692 mkfs.ext4[1234]: Writing superblocks and filesystem accounting information: done Jul 2 07:50:02.325070 systemd[1]: Reached target system-config.target. Jul 2 07:50:02.344413 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 07:50:02.344465 systemd[1]: Reached target user-config.target. Jul 2 07:50:02.363766 systemd[1]: Started update-engine.service. Jul 2 07:50:02.382768 systemd[1]: Started locksmithd.service. Jul 2 07:50:02.577304 umount[1253]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Jul 2 07:50:02.393408 systemd[1]: Starting systemd-hostnamed.service... Jul 2 07:50:02.483687 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 07:50:02.484062 systemd[1]: Finished extend-filesystems.service. Jul 2 07:50:02.504664 systemd[1]: Finished update-ssh-keys-after-ignition.service.
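[Editor's note] The coreos-metadata-sshkeys entries that follow walk the GCE metadata server's fallback chain for SSH keys (instance sshKeys, instance ssh-keys, block-project-ssh-keys, then the project-level attributes), treating a 404 as "attribute not set". A rough Python equivalent of that fetch logic, assuming only the standard Metadata-Flavor: Google header; this is an illustration, not the tool's actual code:

```python
# Sketch of the GCE metadata fallback chain visible in the coreos-metadata log entries.
import urllib.error
import urllib.request

METADATA = "http://169.254.169.254/computeMetadata/v1"
PATHS = [
    "instance/attributes/sshKeys",
    "instance/attributes/ssh-keys",
    "project/attributes/sshKeys",
    "project/attributes/ssh-keys",
]

def fetch_ssh_keys() -> list[str]:
    keys: list[str] = []
    for path in PATHS:
        req = urllib.request.Request(
            f"{METADATA}/{path}", headers={"Metadata-Flavor": "Google"}
        )
        try:
            with urllib.request.urlopen(req, timeout=2) as resp:
                keys.extend(resp.read().decode().splitlines())
        except urllib.error.HTTPError as err:
            if err.code != 404:  # 404 == "resource not found" in the log; keep going
                raise
    return keys

if __name__ == "__main__":
    for key in fetch_ssh_keys():
        print(key)
```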
Jul 2 07:50:02.603111 coreos-metadata[1190]: Jul 02 07:50:02.602 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jul 2 07:50:02.606425 coreos-metadata[1190]: Jul 02 07:50:02.606 INFO Fetch failed with 404: resource not found Jul 2 07:50:02.606425 coreos-metadata[1190]: Jul 02 07:50:02.606 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jul 2 07:50:02.607315 coreos-metadata[1190]: Jul 02 07:50:02.607 INFO Fetch successful Jul 2 07:50:02.607315 coreos-metadata[1190]: Jul 02 07:50:02.607 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jul 2 07:50:02.608056 coreos-metadata[1190]: Jul 02 07:50:02.607 INFO Fetch failed with 404: resource not found Jul 2 07:50:02.608056 coreos-metadata[1190]: Jul 02 07:50:02.607 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jul 2 07:50:02.610068 coreos-metadata[1190]: Jul 02 07:50:02.609 INFO Fetch failed with 404: resource not found Jul 2 07:50:02.610068 coreos-metadata[1190]: Jul 02 07:50:02.609 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jul 2 07:50:02.611503 coreos-metadata[1190]: Jul 02 07:50:02.611 INFO Fetch successful Jul 2 07:50:02.613806 unknown[1190]: wrote ssh authorized keys file for user: core Jul 2 07:50:02.631474 systemd-logind[1226]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 07:50:02.631997 systemd-logind[1226]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 2 07:50:02.632156 systemd-logind[1226]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 07:50:02.641153 systemd-logind[1226]: New seat seat0. Jul 2 07:50:02.646229 systemd[1]: Started systemd-logind.service. Jul 2 07:50:02.651923 update-ssh-keys[1265]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:50:02.655757 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 2 07:50:02.665844 env[1219]: time="2024-07-02T07:50:02.665746265Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 07:50:02.668960 env[1219]: time="2024-07-02T07:50:02.668923432Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:50:02.676272 env[1219]: time="2024-07-02T07:50:02.676217036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:50:02.676984 env[1219]: time="2024-07-02T07:50:02.676948738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:50:02.677534 env[1219]: time="2024-07-02T07:50:02.677487232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:50:02.682566 env[1219]: time="2024-07-02T07:50:02.682512896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jul 2 07:50:02.682761 env[1219]: time="2024-07-02T07:50:02.682725432Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 07:50:02.682932 env[1219]: time="2024-07-02T07:50:02.682907559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 07:50:02.683195 env[1219]: time="2024-07-02T07:50:02.683167404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:50:02.683908 env[1219]: time="2024-07-02T07:50:02.683850491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:50:02.698577 env[1219]: time="2024-07-02T07:50:02.698511637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:50:02.699966 env[1219]: time="2024-07-02T07:50:02.699926023Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 07:50:02.701052 env[1219]: time="2024-07-02T07:50:02.701000926Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 07:50:02.701223 env[1219]: time="2024-07-02T07:50:02.701200482Z" level=info msg="metadata content store policy set" policy=shared Jul 2 07:50:02.717467 env[1219]: time="2024-07-02T07:50:02.717417287Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 07:50:02.717633 env[1219]: time="2024-07-02T07:50:02.717608656Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 07:50:02.717769 env[1219]: time="2024-07-02T07:50:02.717746327Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 07:50:02.717967 env[1219]: time="2024-07-02T07:50:02.717944721Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 07:50:02.718160 env[1219]: time="2024-07-02T07:50:02.718139050Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 07:50:02.718294 env[1219]: time="2024-07-02T07:50:02.718273663Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 07:50:02.719530 env[1219]: time="2024-07-02T07:50:02.719496283Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 07:50:02.719693 env[1219]: time="2024-07-02T07:50:02.719670900Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 07:50:02.719820 env[1219]: time="2024-07-02T07:50:02.719799368Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 07:50:02.719962 env[1219]: time="2024-07-02T07:50:02.719940402Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 07:50:02.720089 env[1219]: time="2024-07-02T07:50:02.720068951Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jul 2 07:50:02.720225 env[1219]: time="2024-07-02T07:50:02.720205516Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 07:50:02.720515 env[1219]: time="2024-07-02T07:50:02.720493353Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 07:50:02.720798 env[1219]: time="2024-07-02T07:50:02.720776101Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 07:50:02.721465 env[1219]: time="2024-07-02T07:50:02.721438970Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 07:50:02.723058 env[1219]: time="2024-07-02T07:50:02.723026116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 07:50:02.723203 env[1219]: time="2024-07-02T07:50:02.723181382Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 07:50:02.723389 env[1219]: time="2024-07-02T07:50:02.723369208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 07:50:02.723578 env[1219]: time="2024-07-02T07:50:02.723557376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 07:50:02.723822 env[1219]: time="2024-07-02T07:50:02.723798974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 07:50:02.723958 env[1219]: time="2024-07-02T07:50:02.723938620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 07:50:02.724943 env[1219]: time="2024-07-02T07:50:02.724914735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 07:50:02.725076 env[1219]: time="2024-07-02T07:50:02.725054229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 07:50:02.725174 env[1219]: time="2024-07-02T07:50:02.725155905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 07:50:02.725275 env[1219]: time="2024-07-02T07:50:02.725257272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 07:50:02.725381 env[1219]: time="2024-07-02T07:50:02.725362016Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 07:50:02.725761 env[1219]: time="2024-07-02T07:50:02.725731517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 07:50:02.726504 env[1219]: time="2024-07-02T07:50:02.726473513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 07:50:02.726697 env[1219]: time="2024-07-02T07:50:02.726670723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 07:50:02.726819 env[1219]: time="2024-07-02T07:50:02.726796098Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 07:50:02.726976 env[1219]: time="2024-07-02T07:50:02.726940912Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 07:50:02.728929 env[1219]: time="2024-07-02T07:50:02.728895903Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 07:50:02.731219 env[1219]: time="2024-07-02T07:50:02.731186160Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 07:50:02.733215 env[1219]: time="2024-07-02T07:50:02.733181372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 07:50:02.733711 env[1219]: time="2024-07-02T07:50:02.733624963Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 07:50:02.737687 env[1219]: time="2024-07-02T07:50:02.733926941Z" level=info msg="Connect containerd service" Jul 2 07:50:02.737687 env[1219]: time="2024-07-02T07:50:02.733995700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 07:50:02.737687 env[1219]: time="2024-07-02T07:50:02.735789336Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:50:02.737687 env[1219]: time="2024-07-02T07:50:02.736167549Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 2 07:50:02.737687 env[1219]: time="2024-07-02T07:50:02.736232650Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 07:50:02.736408 systemd[1]: Started containerd.service. Jul 2 07:50:02.739025 env[1219]: time="2024-07-02T07:50:02.738915096Z" level=info msg="Start subscribing containerd event" Jul 2 07:50:02.744584 dbus-daemon[1191]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 07:50:02.744755 systemd[1]: Started systemd-hostnamed.service. Jul 2 07:50:02.745558 dbus-daemon[1191]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1247 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 07:50:02.751422 env[1219]: time="2024-07-02T07:50:02.751380331Z" level=info msg="Start recovering state" Jul 2 07:50:02.757567 systemd[1]: Starting polkit.service... Jul 2 07:50:02.796110 env[1219]: time="2024-07-02T07:50:02.796066614Z" level=info msg="Start event monitor" Jul 2 07:50:02.811472 polkitd[1268]: Started polkitd version 121 Jul 2 07:50:02.818762 env[1219]: time="2024-07-02T07:50:02.815304998Z" level=info msg="Start snapshots syncer" Jul 2 07:50:02.818762 env[1219]: time="2024-07-02T07:50:02.815356448Z" level=info msg="Start cni network conf syncer for default" Jul 2 07:50:02.818762 env[1219]: time="2024-07-02T07:50:02.815382979Z" level=info msg="Start streaming server" Jul 2 07:50:02.818762 env[1219]: time="2024-07-02T07:50:02.815534781Z" level=info msg="containerd successfully booted in 0.325587s" Jul 2 07:50:02.835248 polkitd[1268]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 07:50:02.835336 polkitd[1268]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 07:50:02.837121 polkitd[1268]: Finished loading, compiling and executing 2 rules Jul 2 07:50:02.837726 dbus-daemon[1191]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 07:50:02.837960 systemd[1]: Started polkit.service. Jul 2 07:50:02.838531 polkitd[1268]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 07:50:02.864715 systemd-hostnamed[1247]: Hostname set to (transient) Jul 2 07:50:02.868295 systemd-resolved[1159]: System hostname changed to 'ci-3510-3-5-16c78dad70e894834bf2.c.flatcar-212911.internal'. Jul 2 07:50:03.812315 systemd[1]: Created slice system-sshd.slice. Jul 2 07:50:04.213313 systemd[1]: Started kubelet.service. Jul 2 07:50:04.744529 locksmithd[1245]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 07:50:05.042105 sshd_keygen[1214]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 07:50:05.081894 systemd[1]: Finished sshd-keygen.service. Jul 2 07:50:05.092228 systemd[1]: Starting issuegen.service... Jul 2 07:50:05.100992 systemd[1]: Started sshd@0-10.128.0.56:22-147.75.109.163:35362.service. Jul 2 07:50:05.114065 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 07:50:05.114321 systemd[1]: Finished issuegen.service. Jul 2 07:50:05.125009 systemd[1]: Starting systemd-user-sessions.service... Jul 2 07:50:05.138910 systemd[1]: Finished systemd-user-sessions.service. Jul 2 07:50:05.149302 systemd[1]: Started getty@tty1.service. Jul 2 07:50:05.158531 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 07:50:05.167323 systemd[1]: Reached target getty.target. 
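[Editor's note] The CRI plugin configuration dumped earlier in the containerd startup (SystemdCgroup:true for the runc runtime, sandbox image registry.k8s.io/pause:3.6, CNI binaries in /opt/cni/bin and configs in /etc/cni/net.d) corresponds roughly to the containerd 1.6 config.toml fragment rendered below. This is a reconstruction for orientation, not a copy of the file shipped on the image:

```python
# Render the config.toml fragment implied by the CRI config dump above.
from textwrap import dedent

CRI_FRAGMENT = dedent("""\
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
""")

if __name__ == "__main__":
    print(CRI_FRAGMENT)
```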
Jul 2 07:50:05.458136 sshd[1297]: Accepted publickey for core from 147.75.109.163 port 35362 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:50:05.461799 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:05.480713 systemd[1]: Created slice user-500.slice. Jul 2 07:50:05.490047 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 07:50:05.501786 systemd-logind[1226]: New session 1 of user core. Jul 2 07:50:05.509716 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 07:50:05.520309 systemd[1]: Starting user@500.service... Jul 2 07:50:05.545067 (systemd)[1306]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:05.637898 kubelet[1282]: E0702 07:50:05.637621 1282 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:50:05.640692 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:50:05.640969 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:50:05.641394 systemd[1]: kubelet.service: Consumed 1.436s CPU time. Jul 2 07:50:05.722529 systemd[1306]: Queued start job for default target default.target. Jul 2 07:50:05.723387 systemd[1306]: Reached target paths.target. Jul 2 07:50:05.723421 systemd[1306]: Reached target sockets.target. Jul 2 07:50:05.723445 systemd[1306]: Reached target timers.target. Jul 2 07:50:05.723465 systemd[1306]: Reached target basic.target. Jul 2 07:50:05.723537 systemd[1306]: Reached target default.target. Jul 2 07:50:05.723597 systemd[1306]: Startup finished in 165ms. Jul 2 07:50:05.723630 systemd[1]: Started user@500.service. Jul 2 07:50:05.733598 systemd[1]: Started session-1.scope. Jul 2 07:50:05.966458 systemd[1]: Started sshd@1-10.128.0.56:22-147.75.109.163:35370.service. Jul 2 07:50:06.285793 sshd[1315]: Accepted publickey for core from 147.75.109.163 port 35370 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:50:06.287307 sshd[1315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:06.296778 systemd[1]: Started session-2.scope. Jul 2 07:50:06.297937 systemd-logind[1226]: New session 2 of user core. Jul 2 07:50:06.505173 sshd[1315]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:06.511695 systemd[1]: sshd@1-10.128.0.56:22-147.75.109.163:35370.service: Deactivated successfully. Jul 2 07:50:06.512781 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 07:50:06.514916 systemd-logind[1226]: Session 2 logged out. Waiting for processes to exit. Jul 2 07:50:06.516517 systemd-logind[1226]: Removed session 2. Jul 2 07:50:06.550080 systemd[1]: Started sshd@2-10.128.0.56:22-147.75.109.163:35378.service. Jul 2 07:50:06.848781 sshd[1321]: Accepted publickey for core from 147.75.109.163 port 35378 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:50:06.851456 sshd[1321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:06.859276 systemd[1]: Started session-3.scope. Jul 2 07:50:06.860586 systemd-logind[1226]: New session 3 of user core. 
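[Editor's note] The "RSA SHA256:GSxC+…" string in the sshd "Accepted publickey" lines above is OpenSSH's key fingerprint: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A small sketch of that computation (the sample key below is hypothetical):

```python
# Recompute an OpenSSH SHA256 fingerprint from an authorized_keys line.
import base64
import hashlib

def ssh_fingerprint(authorized_keys_line: str) -> str:
    # An authorized_keys line looks like: "ssh-rsa AAAAB3NzaC1yc2E... comment"
    blob_b64 = authorized_keys_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    example = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7 user@example"  # hypothetical key
    print(ssh_fingerprint(example))
```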
Jul 2 07:50:07.066883 sshd[1321]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:07.071163 systemd[1]: sshd@2-10.128.0.56:22-147.75.109.163:35378.service: Deactivated successfully. Jul 2 07:50:07.072239 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 07:50:07.074409 systemd-logind[1226]: Session 3 logged out. Waiting for processes to exit. Jul 2 07:50:07.076023 systemd-logind[1226]: Removed session 3. Jul 2 07:50:08.099142 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Jul 2 07:50:10.064921 kernel: loop2: detected capacity change from 0 to 2097152 Jul 2 07:50:10.082978 systemd-nspawn[1327]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Jul 2 07:50:10.082978 systemd-nspawn[1327]: Press ^] three times within 1s to kill container. Jul 2 07:50:10.095943 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:50:10.119213 systemd[1]: tmp-unified1pL9N4.mount: Deactivated successfully. Jul 2 07:50:10.187556 systemd[1]: Started oem-gce.service. Jul 2 07:50:10.188020 systemd[1]: Reached target multi-user.target. Jul 2 07:50:10.190177 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 07:50:10.200853 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 07:50:10.201113 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 07:50:10.204969 systemd[1]: Startup finished in 959ms (kernel) + 6.820s (initrd) + 15.596s (userspace) = 23.376s. Jul 2 07:50:10.231033 systemd-nspawn[1327]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jul 2 07:50:10.231195 systemd-nspawn[1327]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jul 2 07:50:10.231445 systemd-nspawn[1327]: + /usr/bin/google_instance_setup Jul 2 07:50:10.810049 instance-setup[1333]: INFO Running google_set_multiqueue. Jul 2 07:50:10.825591 instance-setup[1333]: INFO Set channels for eth0 to 2. Jul 2 07:50:10.829235 instance-setup[1333]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jul 2 07:50:10.830587 instance-setup[1333]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jul 2 07:50:10.830977 instance-setup[1333]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jul 2 07:50:10.832302 instance-setup[1333]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jul 2 07:50:10.832681 instance-setup[1333]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jul 2 07:50:10.834160 instance-setup[1333]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jul 2 07:50:10.834614 instance-setup[1333]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jul 2 07:50:10.836048 instance-setup[1333]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jul 2 07:50:10.847670 instance-setup[1333]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jul 2 07:50:10.847830 instance-setup[1333]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jul 2 07:50:10.884955 systemd-nspawn[1327]: + /usr/bin/google_metadata_script_runner --script-type startup Jul 2 07:50:11.206991 startup-script[1364]: INFO Starting startup scripts. Jul 2 07:50:11.220079 startup-script[1364]: INFO No startup scripts found in metadata. Jul 2 07:50:11.220250 startup-script[1364]: INFO Finished running startup scripts. 
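[Editor's note] google_set_multiqueue, above, spreads the virtio-net queues across the instance's two vCPUs by pinning each queue's IRQ via /proc/irq/&lt;n&gt;/smp_affinity_list and setting the matching XPS bitmask under /sys/class/net/eth0/queues. A sketch of those writes using the exact IRQ/CPU pairs from the log (the real tool discovers the IRQ numbers itself; must run as root):

```python
# Pin virtio-net queue IRQs and set XPS masks, mirroring the google_set_multiqueue log.
from pathlib import Path

def pin_irq(irq: int, cpu: int) -> None:
    Path(f"/proc/irq/{irq}/smp_affinity_list").write_text(f"{cpu}\n")

def set_xps(iface: str, queue: int, cpu_mask: int) -> None:
    Path(f"/sys/class/net/{iface}/queues/tx-{queue}/xps_cpus").write_text(f"{cpu_mask:x}\n")

if __name__ == "__main__":
    # From the log: IRQs 31/32 -> CPU 0, IRQs 33/34 -> CPU 1; tx-0 XPS=1, tx-1 XPS=2.
    for irq, cpu in [(31, 0), (32, 0), (33, 1), (34, 1)]:
        pin_irq(irq, cpu)
    set_xps("eth0", 0, 0x1)
    set_xps("eth0", 1, 0x2)
```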
Jul 2 07:50:11.251271 systemd-nspawn[1327]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jul 2 07:50:11.251271 systemd-nspawn[1327]: + daemon_pids=() Jul 2 07:50:11.251271 systemd-nspawn[1327]: + for d in accounts clock_skew network Jul 2 07:50:11.251952 systemd-nspawn[1327]: + daemon_pids+=($!) Jul 2 07:50:11.251952 systemd-nspawn[1327]: + for d in accounts clock_skew network Jul 2 07:50:11.252047 systemd-nspawn[1327]: + daemon_pids+=($!) Jul 2 07:50:11.252129 systemd-nspawn[1327]: + for d in accounts clock_skew network Jul 2 07:50:11.252435 systemd-nspawn[1327]: + daemon_pids+=($!) Jul 2 07:50:11.252590 systemd-nspawn[1327]: + NOTIFY_SOCKET=/run/systemd/notify Jul 2 07:50:11.252590 systemd-nspawn[1327]: + /usr/bin/google_accounts_daemon Jul 2 07:50:11.252706 systemd-nspawn[1327]: + /usr/bin/systemd-notify --ready Jul 2 07:50:11.253255 systemd-nspawn[1327]: + /usr/bin/google_network_daemon Jul 2 07:50:11.253587 systemd-nspawn[1327]: + /usr/bin/google_clock_skew_daemon Jul 2 07:50:11.302862 systemd-nspawn[1327]: + wait -n 36 37 38 Jul 2 07:50:11.842133 google-networking[1369]: INFO Starting Google Networking daemon. Jul 2 07:50:11.863773 google-clock-skew[1368]: INFO Starting Google Clock Skew daemon. Jul 2 07:50:11.878322 google-clock-skew[1368]: INFO Clock drift token has changed: 0. Jul 2 07:50:11.886748 systemd-nspawn[1327]: hwclock: Cannot access the Hardware Clock via any known method. Jul 2 07:50:11.886937 systemd-nspawn[1327]: hwclock: Use the --verbose option to see the details of our search for an access method. Jul 2 07:50:11.887655 google-clock-skew[1368]: WARNING Failed to sync system time with hardware clock. Jul 2 07:50:11.992727 groupadd[1379]: group added to /etc/group: name=google-sudoers, GID=1000 Jul 2 07:50:11.996338 groupadd[1379]: group added to /etc/gshadow: name=google-sudoers Jul 2 07:50:12.000083 groupadd[1379]: new group: name=google-sudoers, GID=1000 Jul 2 07:50:12.012655 google-accounts[1367]: INFO Starting Google Accounts daemon. Jul 2 07:50:12.036971 google-accounts[1367]: WARNING OS Login not installed. Jul 2 07:50:12.037953 google-accounts[1367]: INFO Creating a new user account for 0. Jul 2 07:50:12.043114 systemd-nspawn[1327]: useradd: invalid user name '0': use --badname to ignore Jul 2 07:50:12.043748 google-accounts[1367]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jul 2 07:50:15.770625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 07:50:15.770974 systemd[1]: Stopped kubelet.service. Jul 2 07:50:15.771044 systemd[1]: kubelet.service: Consumed 1.436s CPU time. Jul 2 07:50:15.773179 systemd[1]: Starting kubelet.service... Jul 2 07:50:15.997627 systemd[1]: Started kubelet.service. Jul 2 07:50:16.062367 kubelet[1393]: E0702 07:50:16.062204 1393 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:50:16.067028 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:50:16.067249 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:50:17.106263 systemd[1]: Started sshd@3-10.128.0.56:22-147.75.109.163:32788.service. 
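[Editor's note] kubelet keeps exiting above because /var/lib/kubelet/config.yaml does not exist yet; it is only written later by the cluster bootstrap tooling, so systemd schedules restarts in the meantime. For orientation, a minimal, hypothetical KubeletConfiguration that would get past the file check looks like the following; the field values are illustrative, not taken from this node:

```python
# Write a minimal, hypothetical kubelet config file of the kind the service is waiting for.
from pathlib import Path
from textwrap import dedent

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

MINIMAL_CONFIG = dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd          # matches SystemdCgroup=true in the containerd config
    staticPodPath: /etc/kubernetes/manifests
""")

if __name__ == "__main__":
    KUBELET_CONFIG.parent.mkdir(parents=True, exist_ok=True)
    KUBELET_CONFIG.write_text(MINIMAL_CONFIG)
    print(f"wrote {KUBELET_CONFIG}")
```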
Jul 2 07:50:17.393259 sshd[1401]: Accepted publickey for core from 147.75.109.163 port 32788 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:50:17.395051 sshd[1401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:17.400942 systemd-logind[1226]: New session 4 of user core. Jul 2 07:50:17.402274 systemd[1]: Started session-4.scope. Jul 2 07:50:17.604736 sshd[1401]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:17.608767 systemd[1]: sshd@3-10.128.0.56:22-147.75.109.163:32788.service: Deactivated successfully. Jul 2 07:50:17.609790 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 07:50:17.610596 systemd-logind[1226]: Session 4 logged out. Waiting for processes to exit. Jul 2 07:50:17.611772 systemd-logind[1226]: Removed session 4. Jul 2 07:50:17.650236 systemd[1]: Started sshd@4-10.128.0.56:22-147.75.109.163:32802.service. Jul 2 07:50:17.940456 sshd[1407]: Accepted publickey for core from 147.75.109.163 port 32802 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:50:17.942354 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:17.948785 systemd[1]: Started session-5.scope. Jul 2 07:50:17.949397 systemd-logind[1226]: New session 5 of user core. Jul 2 07:50:18.149750 sshd[1407]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:18.153803 systemd[1]: sshd@4-10.128.0.56:22-147.75.109.163:32802.service: Deactivated successfully. Jul 2 07:50:18.154899 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:50:18.155764 systemd-logind[1226]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:50:18.157053 systemd-logind[1226]: Removed session 5. Jul 2 07:50:18.195393 systemd[1]: Started sshd@5-10.128.0.56:22-147.75.109.163:32812.service. Jul 2 07:50:18.484135 sshd[1413]: Accepted publickey for core from 147.75.109.163 port 32812 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:50:18.485933 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:18.492579 systemd[1]: Started session-6.scope. Jul 2 07:50:18.493210 systemd-logind[1226]: New session 6 of user core. Jul 2 07:50:18.697548 sshd[1413]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:18.701246 systemd[1]: sshd@5-10.128.0.56:22-147.75.109.163:32812.service: Deactivated successfully. Jul 2 07:50:18.702248 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 07:50:18.703138 systemd-logind[1226]: Session 6 logged out. Waiting for processes to exit. Jul 2 07:50:18.704465 systemd-logind[1226]: Removed session 6. Jul 2 07:50:18.744209 systemd[1]: Started sshd@6-10.128.0.56:22-147.75.109.163:32824.service. Jul 2 07:50:19.035737 sshd[1419]: Accepted publickey for core from 147.75.109.163 port 32824 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:50:19.037593 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:19.043984 systemd[1]: Started session-7.scope. Jul 2 07:50:19.044595 systemd-logind[1226]: New session 7 of user core. Jul 2 07:50:19.230625 sudo[1422]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:50:19.231058 sudo[1422]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:50:19.248540 systemd[1]: Starting coreos-metadata.service... 
Jul 2 07:50:19.298029 coreos-metadata[1426]: Jul 02 07:50:19.297 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jul 2 07:50:19.299762 coreos-metadata[1426]: Jul 02 07:50:19.299 INFO Fetch successful Jul 2 07:50:19.299927 coreos-metadata[1426]: Jul 02 07:50:19.299 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jul 2 07:50:19.300815 coreos-metadata[1426]: Jul 02 07:50:19.300 INFO Fetch successful Jul 2 07:50:19.300956 coreos-metadata[1426]: Jul 02 07:50:19.300 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jul 2 07:50:19.301640 coreos-metadata[1426]: Jul 02 07:50:19.301 INFO Fetch successful Jul 2 07:50:19.301738 coreos-metadata[1426]: Jul 02 07:50:19.301 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jul 2 07:50:19.302469 coreos-metadata[1426]: Jul 02 07:50:19.302 INFO Fetch successful Jul 2 07:50:19.312593 systemd[1]: Finished coreos-metadata.service. Jul 2 07:50:20.218479 systemd[1]: Stopped kubelet.service. Jul 2 07:50:20.222189 systemd[1]: Starting kubelet.service... Jul 2 07:50:20.253265 systemd[1]: Reloading. Jul 2 07:50:20.377653 /usr/lib/systemd/system-generators/torcx-generator[1484]: time="2024-07-02T07:50:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:50:20.377705 /usr/lib/systemd/system-generators/torcx-generator[1484]: time="2024-07-02T07:50:20Z" level=info msg="torcx already run" Jul 2 07:50:20.506028 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:50:20.506060 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:50:20.529512 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:50:20.677222 systemd[1]: Started kubelet.service. Jul 2 07:50:20.688579 systemd[1]: Stopping kubelet.service... Jul 2 07:50:20.689678 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:50:20.689958 systemd[1]: Stopped kubelet.service. Jul 2 07:50:20.692099 systemd[1]: Starting kubelet.service... Jul 2 07:50:20.885739 systemd[1]: Started kubelet.service. Jul 2 07:50:20.950199 kubelet[1539]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:50:20.950598 kubelet[1539]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:50:20.950681 kubelet[1539]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 07:50:20.950850 kubelet[1539]: I0702 07:50:20.950805 1539 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:50:21.929767 kubelet[1539]: I0702 07:50:21.929718 1539 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 07:50:21.929767 kubelet[1539]: I0702 07:50:21.929751 1539 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:50:21.930106 kubelet[1539]: I0702 07:50:21.930071 1539 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 07:50:21.981009 kubelet[1539]: I0702 07:50:21.980965 1539 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:50:21.997682 kubelet[1539]: I0702 07:50:21.997641 1539 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 07:50:21.999178 kubelet[1539]: I0702 07:50:21.999135 1539 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:50:21.999438 kubelet[1539]: I0702 07:50:21.999395 1539 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:50:21.999438 kubelet[1539]: I0702 07:50:21.999430 1539 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:50:21.999722 kubelet[1539]: I0702 07:50:21.999447 1539 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:50:21.999722 kubelet[1539]: I0702 07:50:21.999592 1539 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:50:21.999831 kubelet[1539]: I0702 07:50:21.999738 1539 kubelet.go:396] "Attempting to sync node with API server" Jul 2 07:50:21.999831 kubelet[1539]: I0702 07:50:21.999761 1539 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:50:21.999831 kubelet[1539]: I0702 07:50:21.999804 1539 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:50:21.999831 kubelet[1539]: I0702 07:50:21.999828 1539 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:50:22.000374 kubelet[1539]: E0702 07:50:22.000328 
1539 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:22.000493 kubelet[1539]: E0702 07:50:22.000402 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:22.001850 kubelet[1539]: I0702 07:50:22.001808 1539 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:50:22.006143 kubelet[1539]: I0702 07:50:22.006100 1539 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:50:22.006238 kubelet[1539]: W0702 07:50:22.006182 1539 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 07:50:22.006933 kubelet[1539]: I0702 07:50:22.006910 1539 server.go:1256] "Started kubelet" Jul 2 07:50:22.017131 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 07:50:22.017318 kubelet[1539]: I0702 07:50:22.017283 1539 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:50:22.027325 kubelet[1539]: I0702 07:50:22.027279 1539 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:50:22.028651 kubelet[1539]: I0702 07:50:22.028619 1539 server.go:461] "Adding debug handlers to kubelet server" Jul 2 07:50:22.030319 kubelet[1539]: I0702 07:50:22.030296 1539 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 07:50:22.030551 kubelet[1539]: I0702 07:50:22.030529 1539 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:50:22.033760 kubelet[1539]: I0702 07:50:22.033208 1539 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:50:22.033760 kubelet[1539]: I0702 07:50:22.033632 1539 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:50:22.034825 kubelet[1539]: I0702 07:50:22.034784 1539 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:50:22.036005 kubelet[1539]: I0702 07:50:22.035973 1539 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:50:22.036192 kubelet[1539]: I0702 07:50:22.036119 1539 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 07:50:22.039733 kubelet[1539]: I0702 07:50:22.039705 1539 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:50:22.056586 kubelet[1539]: E0702 07:50:22.056564 1539 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:50:22.065472 kubelet[1539]: E0702 07:50:22.065449 1539 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.56\" not found" node="10.128.0.56" Jul 2 07:50:22.068090 kubelet[1539]: I0702 07:50:22.067665 1539 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:50:22.068090 kubelet[1539]: I0702 07:50:22.067688 1539 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:50:22.068090 kubelet[1539]: I0702 07:50:22.067731 1539 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:50:22.070723 kubelet[1539]: I0702 07:50:22.070702 1539 policy_none.go:49] "None policy: Start" Jul 2 07:50:22.072079 kubelet[1539]: I0702 07:50:22.072061 1539 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:50:22.072244 kubelet[1539]: I0702 07:50:22.072230 1539 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:50:22.080443 systemd[1]: Created slice kubepods.slice. Jul 2 07:50:22.091150 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 07:50:22.095603 systemd[1]: Created slice kubepods-besteffort.slice. Jul 2 07:50:22.102834 kubelet[1539]: I0702 07:50:22.102797 1539 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:50:22.103524 kubelet[1539]: I0702 07:50:22.103473 1539 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:50:22.107174 kubelet[1539]: E0702 07:50:22.106697 1539 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.56\" not found" Jul 2 07:50:22.135593 kubelet[1539]: I0702 07:50:22.135563 1539 kubelet_node_status.go:73] "Attempting to register node" node="10.128.0.56" Jul 2 07:50:22.141017 kubelet[1539]: I0702 07:50:22.139202 1539 kubelet_node_status.go:76] "Successfully registered node" node="10.128.0.56" Jul 2 07:50:22.157262 kubelet[1539]: I0702 07:50:22.157237 1539 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 2 07:50:22.158231 env[1219]: time="2024-07-02T07:50:22.158124402Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 07:50:22.158731 kubelet[1539]: I0702 07:50:22.158428 1539 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 2 07:50:22.183013 kubelet[1539]: I0702 07:50:22.182912 1539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:50:22.185295 kubelet[1539]: I0702 07:50:22.185267 1539 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:50:22.185484 kubelet[1539]: I0702 07:50:22.185467 1539 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:50:22.185642 kubelet[1539]: I0702 07:50:22.185625 1539 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 07:50:22.185862 kubelet[1539]: E0702 07:50:22.185844 1539 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 2 07:50:22.932086 kubelet[1539]: I0702 07:50:22.932033 1539 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 2 07:50:22.932297 kubelet[1539]: W0702 07:50:22.932273 1539 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jul 2 07:50:22.932513 kubelet[1539]: W0702 07:50:22.932329 1539 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jul 2 07:50:22.932513 kubelet[1539]: W0702 07:50:22.932370 1539 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jul 2 07:50:23.001222 kubelet[1539]: I0702 07:50:23.001181 1539 apiserver.go:52] "Watching apiserver" Jul 2 07:50:23.001721 kubelet[1539]: E0702 07:50:23.001199 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:23.006180 kubelet[1539]: I0702 07:50:23.006138 1539 topology_manager.go:215] "Topology Admit Handler" podUID="d2d465cd-b932-4417-a9e5-b3042d8a5ebe" podNamespace="kube-system" podName="cilium-dgzxj" Jul 2 07:50:23.006357 kubelet[1539]: I0702 07:50:23.006312 1539 topology_manager.go:215] "Topology Admit Handler" podUID="6e7b5e52-f3b2-4edc-a42b-f4e3d0e89118" podNamespace="kube-system" podName="kube-proxy-t8wbb" Jul 2 07:50:23.014650 systemd[1]: Created slice kubepods-besteffort-pod6e7b5e52_f3b2_4edc_a42b_f4e3d0e89118.slice. Jul 2 07:50:23.026302 systemd[1]: Created slice kubepods-burstable-podd2d465cd_b932_4417_a9e5_b3042d8a5ebe.slice. 
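[Editor's note] The kubepods-besteffort-pod… and kubepods-burstable-pod… slices created above follow the kubelet's systemd cgroup naming when cgroupDriver is systemd: one slice per QoS class under kubepods.slice, and a per-pod slice whose name embeds the pod UID with dashes replaced by underscores. A small sketch of that naming rule (the guaranteed-pod case is my reading of the convention and is not shown in this log):

```python
# Derive the systemd slice name the kubelet uses for a pod, matching the log entries above.
def pod_slice(qos: str, pod_uid: str) -> str:
    escaped = pod_uid.replace("-", "_")
    if qos == "guaranteed":
        # Guaranteed pods sit directly under kubepods.slice (assumption, not in this log).
        return f"kubepods-pod{escaped}.slice"
    return f"kubepods-{qos}-pod{escaped}.slice"

if __name__ == "__main__":
    print(pod_slice("besteffort", "6e7b5e52-f3b2-4edc-a42b-f4e3d0e89118"))
    print(pod_slice("burstable", "d2d465cd-b932-4417-a9e5-b3042d8a5ebe"))
```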
Jul 2 07:50:23.034425 kubelet[1539]: I0702 07:50:23.034363 1539 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:50:23.041762 kubelet[1539]: I0702 07:50:23.041735 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-clustermesh-secrets\") pod \"cilium-dgzxj\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " pod="kube-system/cilium-dgzxj" Jul 2 07:50:23.041964 kubelet[1539]: I0702 07:50:23.041943 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cilium-config-path\") pod \"cilium-dgzxj\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " pod="kube-system/cilium-dgzxj" Jul 2 07:50:23.042067 kubelet[1539]: I0702 07:50:23.041986 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhc8r\" (UniqueName: \"kubernetes.io/projected/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-kube-api-access-rhc8r\") pod \"cilium-dgzxj\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " pod="kube-system/cilium-dgzxj" Jul 2 07:50:23.042067 kubelet[1539]: I0702 07:50:23.042024 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e7b5e52-f3b2-4edc-a42b-f4e3d0e89118-xtables-lock\") pod \"kube-proxy-t8wbb\" (UID: \"6e7b5e52-f3b2-4edc-a42b-f4e3d0e89118\") " pod="kube-system/kube-proxy-t8wbb" Jul 2 07:50:23.042067 kubelet[1539]: I0702 07:50:23.042060 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n78dl\" (UniqueName: \"kubernetes.io/projected/6e7b5e52-f3b2-4edc-a42b-f4e3d0e89118-kube-api-access-n78dl\") pod \"kube-proxy-t8wbb\" (UID: \"6e7b5e52-f3b2-4edc-a42b-f4e3d0e89118\") " pod="kube-system/kube-proxy-t8wbb" Jul 2 07:50:23.042227 kubelet[1539]: I0702 07:50:23.042104 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cilium-cgroup\") pod \"cilium-dgzxj\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " pod="kube-system/cilium-dgzxj" Jul 2 07:50:23.042227 kubelet[1539]: I0702 07:50:23.042139 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cni-path\") pod \"cilium-dgzxj\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " pod="kube-system/cilium-dgzxj" Jul 2 07:50:23.042227 kubelet[1539]: I0702 07:50:23.042177 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-etc-cni-netd\") pod \"cilium-dgzxj\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " pod="kube-system/cilium-dgzxj" Jul 2 07:50:23.042227 kubelet[1539]: I0702 07:50:23.042210 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-xtables-lock\") pod \"cilium-dgzxj\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " 
pod="kube-system/cilium-dgzxj" Jul 2 07:50:23.042455 kubelet[1539]: I0702 07:50:23.042246 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-host-proc-sys-kernel\") pod \"cilium-dgzxj\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " pod="kube-system/cilium-dgzxj" Jul 2 07:50:23.042455 kubelet[1539]: I0702 07:50:23.042280 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-hubble-tls\") pod \"cilium-dgzxj\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " pod="kube-system/cilium-dgzxj" Jul 2 07:50:23.042455 kubelet[1539]: I0702 07:50:23.042318 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-host-proc-sys-net\") pod \"cilium-dgzxj\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " pod="kube-system/cilium-dgzxj" Jul 2 07:50:23.042455 kubelet[1539]: I0702 07:50:23.042353 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e7b5e52-f3b2-4edc-a42b-f4e3d0e89118-kube-proxy\") pod \"kube-proxy-t8wbb\" (UID: \"6e7b5e52-f3b2-4edc-a42b-f4e3d0e89118\") " pod="kube-system/kube-proxy-t8wbb" Jul 2 07:50:23.042455 kubelet[1539]: I0702 07:50:23.042402 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e7b5e52-f3b2-4edc-a42b-f4e3d0e89118-lib-modules\") pod \"kube-proxy-t8wbb\" (UID: \"6e7b5e52-f3b2-4edc-a42b-f4e3d0e89118\") " pod="kube-system/kube-proxy-t8wbb" Jul 2 07:50:23.042455 kubelet[1539]: I0702 07:50:23.042435 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cilium-run\") pod \"cilium-dgzxj\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " pod="kube-system/cilium-dgzxj" Jul 2 07:50:23.042743 kubelet[1539]: I0702 07:50:23.042489 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-bpf-maps\") pod \"cilium-dgzxj\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " pod="kube-system/cilium-dgzxj" Jul 2 07:50:23.042743 kubelet[1539]: I0702 07:50:23.042523 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-hostproc\") pod \"cilium-dgzxj\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " pod="kube-system/cilium-dgzxj" Jul 2 07:50:23.042743 kubelet[1539]: I0702 07:50:23.042563 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-lib-modules\") pod \"cilium-dgzxj\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " pod="kube-system/cilium-dgzxj" Jul 2 07:50:23.324419 env[1219]: time="2024-07-02T07:50:23.324345870Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-t8wbb,Uid:6e7b5e52-f3b2-4edc-a42b-f4e3d0e89118,Namespace:kube-system,Attempt:0,}" Jul 2 07:50:23.335417 env[1219]: time="2024-07-02T07:50:23.335079163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dgzxj,Uid:d2d465cd-b932-4417-a9e5-b3042d8a5ebe,Namespace:kube-system,Attempt:0,}" Jul 2 07:50:23.404461 sudo[1422]: pam_unix(sudo:session): session closed for user root Jul 2 07:50:23.448902 sshd[1419]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:23.454616 systemd[1]: sshd@6-10.128.0.56:22-147.75.109.163:32824.service: Deactivated successfully. Jul 2 07:50:23.455518 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 07:50:23.457224 systemd-logind[1226]: Session 7 logged out. Waiting for processes to exit. Jul 2 07:50:23.458658 systemd-logind[1226]: Removed session 7. Jul 2 07:50:23.834854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4054052969.mount: Deactivated successfully. Jul 2 07:50:23.844617 env[1219]: time="2024-07-02T07:50:23.844557306Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:23.846744 env[1219]: time="2024-07-02T07:50:23.846690035Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:23.847764 env[1219]: time="2024-07-02T07:50:23.847711674Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:23.851094 env[1219]: time="2024-07-02T07:50:23.851043766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:23.852294 env[1219]: time="2024-07-02T07:50:23.852247228Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:23.853139 env[1219]: time="2024-07-02T07:50:23.853104254Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:23.855387 env[1219]: time="2024-07-02T07:50:23.855339677Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:23.856337 env[1219]: time="2024-07-02T07:50:23.856299004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:23.892451 env[1219]: time="2024-07-02T07:50:23.889304998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:50:23.892451 env[1219]: time="2024-07-02T07:50:23.889393857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:50:23.892451 env[1219]: time="2024-07-02T07:50:23.889417638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:50:23.892451 env[1219]: time="2024-07-02T07:50:23.890949668Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557 pid=1596 runtime=io.containerd.runc.v2 Jul 2 07:50:23.892785 env[1219]: time="2024-07-02T07:50:23.892006760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:50:23.892785 env[1219]: time="2024-07-02T07:50:23.892057696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:50:23.892785 env[1219]: time="2024-07-02T07:50:23.892075920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:50:23.892785 env[1219]: time="2024-07-02T07:50:23.892314324Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df84cb872635aa6ea9447a77deb468bb9848c457bc22c47be4399ec129d35fbc pid=1597 runtime=io.containerd.runc.v2 Jul 2 07:50:23.913172 systemd[1]: Started cri-containerd-2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557.scope. Jul 2 07:50:23.934608 systemd[1]: Started cri-containerd-df84cb872635aa6ea9447a77deb468bb9848c457bc22c47be4399ec129d35fbc.scope. Jul 2 07:50:23.976686 env[1219]: time="2024-07-02T07:50:23.976617673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dgzxj,Uid:d2d465cd-b932-4417-a9e5-b3042d8a5ebe,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\"" Jul 2 07:50:23.980429 env[1219]: time="2024-07-02T07:50:23.980374170Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 07:50:23.987437 env[1219]: time="2024-07-02T07:50:23.987391495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t8wbb,Uid:6e7b5e52-f3b2-4edc-a42b-f4e3d0e89118,Namespace:kube-system,Attempt:0,} returns sandbox id \"df84cb872635aa6ea9447a77deb468bb9848c457bc22c47be4399ec129d35fbc\"" Jul 2 07:50:24.002053 kubelet[1539]: E0702 07:50:24.002013 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:25.002837 kubelet[1539]: E0702 07:50:25.002786 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:26.003683 kubelet[1539]: E0702 07:50:26.003581 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:27.003884 kubelet[1539]: E0702 07:50:27.003775 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:28.004385 kubelet[1539]: E0702 07:50:28.004320 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:29.005619 kubelet[1539]: E0702 07:50:29.005519 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:29.200802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3600081928.mount: Deactivated successfully. Jul 2 07:50:30.006056 kubelet[1539]: E0702 07:50:30.005950 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:31.006234 kubelet[1539]: E0702 07:50:31.006168 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:32.007304 kubelet[1539]: E0702 07:50:32.007217 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:32.433135 env[1219]: time="2024-07-02T07:50:32.433066812Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:32.435751 env[1219]: time="2024-07-02T07:50:32.435683822Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:32.445151 env[1219]: time="2024-07-02T07:50:32.445094377Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:32.445763 env[1219]: time="2024-07-02T07:50:32.445712360Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 07:50:32.447488 env[1219]: time="2024-07-02T07:50:32.447437576Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 07:50:32.449322 env[1219]: time="2024-07-02T07:50:32.449268101Z" level=info msg="CreateContainer within sandbox \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:50:32.468628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3195027002.mount: Deactivated successfully. Jul 2 07:50:32.477402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1035988386.mount: Deactivated successfully. Jul 2 07:50:32.487156 env[1219]: time="2024-07-02T07:50:32.487109325Z" level=info msg="CreateContainer within sandbox \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4\"" Jul 2 07:50:32.488008 env[1219]: time="2024-07-02T07:50:32.487983308Z" level=info msg="StartContainer for \"05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4\"" Jul 2 07:50:32.518903 systemd[1]: Started cri-containerd-05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4.scope. Jul 2 07:50:32.558046 env[1219]: time="2024-07-02T07:50:32.557996268Z" level=info msg="StartContainer for \"05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4\" returns successfully" Jul 2 07:50:32.570021 systemd[1]: cri-containerd-05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4.scope: Deactivated successfully. 
Jul 2 07:50:32.896374 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 2 07:50:33.008175 kubelet[1539]: E0702 07:50:33.008130 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:33.463019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4-rootfs.mount: Deactivated successfully. Jul 2 07:50:34.008614 kubelet[1539]: E0702 07:50:34.008550 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:34.386206 env[1219]: time="2024-07-02T07:50:34.386145469Z" level=info msg="shim disconnected" id=05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4 Jul 2 07:50:34.386819 env[1219]: time="2024-07-02T07:50:34.386760587Z" level=warning msg="cleaning up after shim disconnected" id=05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4 namespace=k8s.io Jul 2 07:50:34.386819 env[1219]: time="2024-07-02T07:50:34.386792142Z" level=info msg="cleaning up dead shim" Jul 2 07:50:34.398397 env[1219]: time="2024-07-02T07:50:34.398352120Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:50:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1722 runtime=io.containerd.runc.v2\n" Jul 2 07:50:35.009623 kubelet[1539]: E0702 07:50:35.009546 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:35.143727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2736452190.mount: Deactivated successfully. Jul 2 07:50:35.265243 env[1219]: time="2024-07-02T07:50:35.264743583Z" level=info msg="CreateContainer within sandbox \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:50:35.287638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744103936.mount: Deactivated successfully. Jul 2 07:50:35.302061 env[1219]: time="2024-07-02T07:50:35.301988419Z" level=info msg="CreateContainer within sandbox \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86\"" Jul 2 07:50:35.302943 env[1219]: time="2024-07-02T07:50:35.302903068Z" level=info msg="StartContainer for \"108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86\"" Jul 2 07:50:35.342081 systemd[1]: Started cri-containerd-108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86.scope. Jul 2 07:50:35.405996 env[1219]: time="2024-07-02T07:50:35.405938493Z" level=info msg="StartContainer for \"108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86\" returns successfully" Jul 2 07:50:35.413432 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:50:35.413805 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:50:35.415554 systemd[1]: Stopping systemd-sysctl.service... Jul 2 07:50:35.418005 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:50:35.425500 systemd[1]: cri-containerd-108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86.scope: Deactivated successfully. Jul 2 07:50:35.435815 systemd[1]: Finished systemd-sysctl.service. 
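Each Cilium init container above starts, exits, and is then followed by a "shim disconnected" / "cleaning up after shim disconnected" pair once its runc v2 shim goes away. A rough way to see which containers in this journal are short-lived init steps and which keep running is to pair the StartContainer completions with the later shim-disconnect events; this is a sketch only, and journal.log is again a stand-in filename.

    # Pair "StartContainer ... returns successfully" with the later
    # "shim disconnected" id=... entry for the same container id.
    import re
    from datetime import datetime

    text = " ".join(open("journal.log", encoding="utf-8").read().split())
    entries = re.split(r'(?=Jul\s+2 \d{2}:\d{2}:\d{2}\.\d+ )', text)

    def when(entry):
        # Field 3 is the journal timestamp, e.g. "07:50:32.557996".
        return datetime.strptime(entry.split()[2][:15], "%H:%M:%S.%f")

    started, gone = {}, {}
    for e in entries[1:]:
        m = re.search(r'StartContainer for \\?"([0-9a-f]+)\\?" returns successfully', e)
        if m:
            started[m.group(1)] = when(e)
        m = re.search(r'shim disconnected" id=([0-9a-f]+)', e)
        if m:
            gone.setdefault(m.group(1), when(e))

    for cid, t0 in started.items():
        life = (gone[cid] - t0).total_seconds() if cid in gone else None
        print(cid[:12], f"{life:6.1f}s" if life is not None else "  still running")
    # mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state
    # all disappear within a couple of seconds; kube-proxy keeps running.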
Jul 2 07:50:35.590676 env[1219]: time="2024-07-02T07:50:35.590611660Z" level=info msg="shim disconnected" id=108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86 Jul 2 07:50:35.591017 env[1219]: time="2024-07-02T07:50:35.590985035Z" level=warning msg="cleaning up after shim disconnected" id=108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86 namespace=k8s.io Jul 2 07:50:35.591126 env[1219]: time="2024-07-02T07:50:35.591105264Z" level=info msg="cleaning up dead shim" Jul 2 07:50:35.604090 env[1219]: time="2024-07-02T07:50:35.604044549Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:50:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1787 runtime=io.containerd.runc.v2\n" Jul 2 07:50:35.620055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2485166163.mount: Deactivated successfully. Jul 2 07:50:35.979491 env[1219]: time="2024-07-02T07:50:35.979340265Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:35.981935 env[1219]: time="2024-07-02T07:50:35.981888123Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:35.983948 env[1219]: time="2024-07-02T07:50:35.983907645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:35.985613 env[1219]: time="2024-07-02T07:50:35.985575681Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:35.986186 env[1219]: time="2024-07-02T07:50:35.986145378Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jul 2 07:50:35.988904 env[1219]: time="2024-07-02T07:50:35.988832236Z" level=info msg="CreateContainer within sandbox \"df84cb872635aa6ea9447a77deb468bb9848c457bc22c47be4399ec129d35fbc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:50:36.005327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1935131729.mount: Deactivated successfully. Jul 2 07:50:36.011424 kubelet[1539]: E0702 07:50:36.011389 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:36.015324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236465306.mount: Deactivated successfully. Jul 2 07:50:36.018699 env[1219]: time="2024-07-02T07:50:36.018658994Z" level=info msg="CreateContainer within sandbox \"df84cb872635aa6ea9447a77deb468bb9848c457bc22c47be4399ec129d35fbc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eb712f4c669cde656b7e45ea2e021bc785b67bd63e7998513d73f716efee4757\"" Jul 2 07:50:36.019369 env[1219]: time="2024-07-02T07:50:36.019334115Z" level=info msg="StartContainer for \"eb712f4c669cde656b7e45ea2e021bc785b67bd63e7998513d73f716efee4757\"" Jul 2 07:50:36.049525 systemd[1]: Started cri-containerd-eb712f4c669cde656b7e45ea2e021bc785b67bd63e7998513d73f716efee4757.scope. 
Jul 2 07:50:36.097991 env[1219]: time="2024-07-02T07:50:36.097915786Z" level=info msg="StartContainer for \"eb712f4c669cde656b7e45ea2e021bc785b67bd63e7998513d73f716efee4757\" returns successfully" Jul 2 07:50:36.272650 env[1219]: time="2024-07-02T07:50:36.272595201Z" level=info msg="CreateContainer within sandbox \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:50:36.276498 kubelet[1539]: I0702 07:50:36.276459 1539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-t8wbb" podStartSLOduration=2.278430488 podStartE2EDuration="14.276392624s" podCreationTimestamp="2024-07-02 07:50:22 +0000 UTC" firstStartedPulling="2024-07-02 07:50:23.988540454 +0000 UTC m=+3.095624913" lastFinishedPulling="2024-07-02 07:50:35.98650258 +0000 UTC m=+15.093587049" observedRunningTime="2024-07-02 07:50:36.275948747 +0000 UTC m=+15.383033222" watchObservedRunningTime="2024-07-02 07:50:36.276392624 +0000 UTC m=+15.383477107" Jul 2 07:50:36.294373 env[1219]: time="2024-07-02T07:50:36.294304648Z" level=info msg="CreateContainer within sandbox \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89\"" Jul 2 07:50:36.295374 env[1219]: time="2024-07-02T07:50:36.295325831Z" level=info msg="StartContainer for \"eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89\"" Jul 2 07:50:36.323086 systemd[1]: Started cri-containerd-eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89.scope. Jul 2 07:50:36.385409 env[1219]: time="2024-07-02T07:50:36.385337935Z" level=info msg="StartContainer for \"eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89\" returns successfully" Jul 2 07:50:36.387394 systemd[1]: cri-containerd-eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89.scope: Deactivated successfully. 
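The pod_startup_latency_tracker entry above records three related figures for kube-proxy-t8wbb: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and, consistent with the printed values, podStartSLOduration is that end-to-end time minus the image-pull window (firstStartedPulling to lastFinishedPulling). A small check of the arithmetic with the logged values follows; the timestamps are truncated to microseconds, so the last digits differ slightly from the logged nanosecond figures.

    # Recompute kube-proxy-t8wbb's startup figures from the values logged by
    # pod_startup_latency_tracker above.
    from datetime import datetime

    def ts(s: str) -> datetime:
        # The journal prints nanoseconds; strptime only accepts microseconds.
        date, _, frac = s.partition(".")
        return datetime.strptime(f"{date}.{(frac + '000000')[:6]}", "%Y-%m-%d %H:%M:%S.%f")

    created   = ts("2024-07-02 07:50:22")               # podCreationTimestamp
    pull_from = ts("2024-07-02 07:50:23.988540454")     # firstStartedPulling
    pull_to   = ts("2024-07-02 07:50:35.98650258")      # lastFinishedPulling
    running   = ts("2024-07-02 07:50:36.276392624")     # observedRunningTime

    e2e = (running - created).total_seconds()
    slo = e2e - (pull_to - pull_from).total_seconds()
    print(f"podStartE2EDuration ~ {e2e:.6f}s")   # ~14.276392s (logged 14.276392624s)
    print(f"podStartSLOduration ~ {slo:.6f}s")   # ~2.278430s  (logged 2.278430488)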
Jul 2 07:50:36.526539 env[1219]: time="2024-07-02T07:50:36.526400762Z" level=info msg="shim disconnected" id=eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89 Jul 2 07:50:36.527393 env[1219]: time="2024-07-02T07:50:36.527342691Z" level=warning msg="cleaning up after shim disconnected" id=eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89 namespace=k8s.io Jul 2 07:50:36.528079 env[1219]: time="2024-07-02T07:50:36.528043048Z" level=info msg="cleaning up dead shim" Jul 2 07:50:36.544901 env[1219]: time="2024-07-02T07:50:36.544774358Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:50:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1987 runtime=io.containerd.runc.v2\n" Jul 2 07:50:37.012114 kubelet[1539]: E0702 07:50:37.011967 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:37.276169 env[1219]: time="2024-07-02T07:50:37.276117637Z" level=info msg="CreateContainer within sandbox \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:50:37.298533 env[1219]: time="2024-07-02T07:50:37.298471592Z" level=info msg="CreateContainer within sandbox \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb\"" Jul 2 07:50:37.299235 env[1219]: time="2024-07-02T07:50:37.299188703Z" level=info msg="StartContainer for \"6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb\"" Jul 2 07:50:37.327213 systemd[1]: Started cri-containerd-6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb.scope. Jul 2 07:50:37.366223 systemd[1]: cri-containerd-6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb.scope: Deactivated successfully. Jul 2 07:50:37.368178 env[1219]: time="2024-07-02T07:50:37.367667730Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2d465cd_b932_4417_a9e5_b3042d8a5ebe.slice/cri-containerd-6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb.scope/memory.events\": no such file or directory" Jul 2 07:50:37.370591 env[1219]: time="2024-07-02T07:50:37.370526503Z" level=info msg="StartContainer for \"6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb\" returns successfully" Jul 2 07:50:37.394702 env[1219]: time="2024-07-02T07:50:37.394644514Z" level=info msg="shim disconnected" id=6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb Jul 2 07:50:37.394981 env[1219]: time="2024-07-02T07:50:37.394704561Z" level=warning msg="cleaning up after shim disconnected" id=6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb namespace=k8s.io Jul 2 07:50:37.394981 env[1219]: time="2024-07-02T07:50:37.394720413Z" level=info msg="cleaning up dead shim" Jul 2 07:50:37.405019 env[1219]: time="2024-07-02T07:50:37.404975291Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:50:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2062 runtime=io.containerd.runc.v2\n" Jul 2 07:50:37.620307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb-rootfs.mount: Deactivated successfully. 
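The sandbox 2c0c7189... has now run mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state in sequence, each announced by a "CreateContainer ... returns container id" and a matching "StartContainer ... returns successfully". A sketch that reconstructs that order from a dump like this one (journal.log is an assumed filename):

    # Map container ids to the Name recorded in &ContainerMetadata{...} and print
    # the containers in the order their StartContainer completions appear.
    import re

    text = " ".join(open("journal.log", encoding="utf-8").read().split())

    CREATED = re.compile(
        r'ContainerMetadata\{Name:([A-Za-z0-9-]+),Attempt:\d+,\}'
        r' returns container id \\?"([0-9a-f]+)\\?"')
    STARTED = re.compile(r'StartContainer for \\?"([0-9a-f]+)\\?" returns successfully')

    name_by_id = {cid: name for name, cid in CREATED.findall(text)}
    for cid in STARTED.findall(text):
        print(f"{name_by_id.get(cid, '?'):24s} {cid[:12]}")
    # For cilium-dgzxj this prints mount-cgroup, apply-sysctl-overwrites,
    # mount-bpf-fs, clean-cilium-state and finally the long-running cilium-agent,
    # interleaved with kube-proxy, nginx, nfs-server-provisioner and test from
    # the other sandboxes.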
Jul 2 07:50:38.012459 kubelet[1539]: E0702 07:50:38.012324 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:38.280965 env[1219]: time="2024-07-02T07:50:38.280896670Z" level=info msg="CreateContainer within sandbox \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:50:38.305330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2297757561.mount: Deactivated successfully. Jul 2 07:50:38.315521 env[1219]: time="2024-07-02T07:50:38.315464147Z" level=info msg="CreateContainer within sandbox \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e\"" Jul 2 07:50:38.316296 env[1219]: time="2024-07-02T07:50:38.316239777Z" level=info msg="StartContainer for \"59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e\"" Jul 2 07:50:38.339971 systemd[1]: Started cri-containerd-59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e.scope. Jul 2 07:50:38.386522 env[1219]: time="2024-07-02T07:50:38.386468902Z" level=info msg="StartContainer for \"59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e\" returns successfully" Jul 2 07:50:38.578313 kubelet[1539]: I0702 07:50:38.577711 1539 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 07:50:38.924034 kernel: Initializing XFRM netlink socket Jul 2 07:50:39.013094 kubelet[1539]: E0702 07:50:39.013014 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:39.308117 kubelet[1539]: I0702 07:50:39.308080 1539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dgzxj" podStartSLOduration=8.840975968 podStartE2EDuration="17.308013613s" podCreationTimestamp="2024-07-02 07:50:22 +0000 UTC" firstStartedPulling="2024-07-02 07:50:23.979285824 +0000 UTC m=+3.086370269" lastFinishedPulling="2024-07-02 07:50:32.446323461 +0000 UTC m=+11.553407914" observedRunningTime="2024-07-02 07:50:39.305963213 +0000 UTC m=+18.413047694" watchObservedRunningTime="2024-07-02 07:50:39.308013613 +0000 UTC m=+18.415098085" Jul 2 07:50:40.014124 kubelet[1539]: E0702 07:50:40.014051 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:40.580341 systemd-networkd[1028]: cilium_host: Link UP Jul 2 07:50:40.595248 systemd-networkd[1028]: cilium_net: Link UP Jul 2 07:50:40.596319 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 2 07:50:40.596386 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 07:50:40.595554 systemd-networkd[1028]: cilium_net: Gained carrier Jul 2 07:50:40.596902 systemd-networkd[1028]: cilium_host: Gained carrier Jul 2 07:50:40.734429 systemd-networkd[1028]: cilium_vxlan: Link UP Jul 2 07:50:40.734441 systemd-networkd[1028]: cilium_vxlan: Gained carrier Jul 2 07:50:40.992906 kernel: NET: Registered PF_ALG protocol family Jul 2 07:50:41.015310 kubelet[1539]: E0702 07:50:41.015266 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:41.174521 systemd-networkd[1028]: cilium_host: Gained IPv6LL Jul 2 07:50:41.366338 systemd-networkd[1028]: cilium_net: Gained 
IPv6LL Jul 2 07:50:41.618770 kubelet[1539]: I0702 07:50:41.618645 1539 topology_manager.go:215] "Topology Admit Handler" podUID="ea7a7458-54f6-46eb-8513-2eb7bcd7c875" podNamespace="default" podName="nginx-deployment-6d5f899847-7bzkd" Jul 2 07:50:41.627557 systemd[1]: Created slice kubepods-besteffort-podea7a7458_54f6_46eb_8513_2eb7bcd7c875.slice. Jul 2 07:50:41.663900 kubelet[1539]: I0702 07:50:41.663855 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bnrc\" (UniqueName: \"kubernetes.io/projected/ea7a7458-54f6-46eb-8513-2eb7bcd7c875-kube-api-access-5bnrc\") pod \"nginx-deployment-6d5f899847-7bzkd\" (UID: \"ea7a7458-54f6-46eb-8513-2eb7bcd7c875\") " pod="default/nginx-deployment-6d5f899847-7bzkd" Jul 2 07:50:41.789537 systemd-networkd[1028]: lxc_health: Link UP Jul 2 07:50:41.807432 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:50:41.809390 systemd-networkd[1028]: lxc_health: Gained carrier Jul 2 07:50:41.934698 env[1219]: time="2024-07-02T07:50:41.934552434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7bzkd,Uid:ea7a7458-54f6-46eb-8513-2eb7bcd7c875,Namespace:default,Attempt:0,}" Jul 2 07:50:42.000279 kubelet[1539]: E0702 07:50:42.000229 1539 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:42.015494 kubelet[1539]: E0702 07:50:42.015450 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:42.134380 systemd-networkd[1028]: cilium_vxlan: Gained IPv6LL Jul 2 07:50:42.501698 systemd-networkd[1028]: lxc3cc741f73c11: Link UP Jul 2 07:50:42.511935 kernel: eth0: renamed from tmp61dab Jul 2 07:50:42.531148 systemd-networkd[1028]: lxc3cc741f73c11: Gained carrier Jul 2 07:50:42.531914 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3cc741f73c11: link becomes ready Jul 2 07:50:43.016005 kubelet[1539]: E0702 07:50:43.015944 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:43.094001 systemd-networkd[1028]: lxc_health: Gained IPv6LL Jul 2 07:50:44.017102 kubelet[1539]: E0702 07:50:44.017048 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:44.054471 systemd-networkd[1028]: lxc3cc741f73c11: Gained IPv6LL Jul 2 07:50:45.018842 kubelet[1539]: E0702 07:50:45.018792 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:46.019882 kubelet[1539]: E0702 07:50:46.019819 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:46.949379 env[1219]: time="2024-07-02T07:50:46.949251931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:50:46.949379 env[1219]: time="2024-07-02T07:50:46.949311825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:50:46.949379 env[1219]: time="2024-07-02T07:50:46.949330190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:50:46.950131 env[1219]: time="2024-07-02T07:50:46.949687576Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/61dabb8e91cd1c42f4480b71fd5f0c432ad9f16963c57a6bdc223fb9e6ee5170 pid=2575 runtime=io.containerd.runc.v2 Jul 2 07:50:46.969644 systemd[1]: Started cri-containerd-61dabb8e91cd1c42f4480b71fd5f0c432ad9f16963c57a6bdc223fb9e6ee5170.scope. Jul 2 07:50:46.980224 systemd[1]: run-containerd-runc-k8s.io-61dabb8e91cd1c42f4480b71fd5f0c432ad9f16963c57a6bdc223fb9e6ee5170-runc.kpRsOl.mount: Deactivated successfully. Jul 2 07:50:47.021031 kubelet[1539]: E0702 07:50:47.020966 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:47.039631 env[1219]: time="2024-07-02T07:50:47.039569078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-7bzkd,Uid:ea7a7458-54f6-46eb-8513-2eb7bcd7c875,Namespace:default,Attempt:0,} returns sandbox id \"61dabb8e91cd1c42f4480b71fd5f0c432ad9f16963c57a6bdc223fb9e6ee5170\"" Jul 2 07:50:47.041679 env[1219]: time="2024-07-02T07:50:47.041639658Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 07:50:47.830012 update_engine[1212]: I0702 07:50:47.829947 1212 update_attempter.cc:509] Updating boot flags... Jul 2 07:50:48.021338 kubelet[1539]: E0702 07:50:48.021238 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:49.022499 kubelet[1539]: E0702 07:50:49.022429 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:49.612194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3994690181.mount: Deactivated successfully. 
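Each pod in this capture gets its own containerd sandbox, and the only place the sandbox id is tied back to a pod name is the "RunPodSandbox ... returns sandbox id" completion (kube-proxy-t8wbb and cilium-dgzxj earlier, nginx-deployment-6d5f899847-7bzkd just above). A small lookup table can be rebuilt from those completions; journal.log is again an assumed filename, and whitespace is normalised first because this capture wraps some entries across lines.

    # Build a sandbox-id -> namespace/pod map from the RunPodSandbox completions.
    import re

    text = " ".join(open("journal.log", encoding="utf-8").read().split())
    SANDBOX = re.compile(
        r'RunPodSandbox for &PodSandboxMetadata\{Name:([\w.-]+),Uid:([\w-]+),'
        r'Namespace:([\w-]+),Attempt:\d+,\} returns sandbox id \\?"([0-9a-f]+)\\?"')

    for name, uid, namespace, sandbox in SANDBOX.findall(text):
        print(f"{sandbox[:12]}  {namespace}/{name}  uid={uid}")
    # 2c0c71894893 -> kube-system/cilium-dgzxj, df84cb872635 -> kube-system/kube-proxy-t8wbb,
    # 61dabb8e91cd -> default/nginx-deployment-6d5f899847-7bzkd, and so on.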
Jul 2 07:50:50.022784 kubelet[1539]: E0702 07:50:50.022705 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:50.101918 kubelet[1539]: I0702 07:50:50.101644 1539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 07:50:51.023890 kubelet[1539]: E0702 07:50:51.023811 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:51.288092 env[1219]: time="2024-07-02T07:50:51.287586012Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:51.292974 env[1219]: time="2024-07-02T07:50:51.292929657Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:51.298021 env[1219]: time="2024-07-02T07:50:51.297971178Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:51.299086 env[1219]: time="2024-07-02T07:50:51.299044709Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:50:51.300237 env[1219]: time="2024-07-02T07:50:51.300140428Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 07:50:51.303328 env[1219]: time="2024-07-02T07:50:51.303233899Z" level=info msg="CreateContainer within sandbox \"61dabb8e91cd1c42f4480b71fd5f0c432ad9f16963c57a6bdc223fb9e6ee5170\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 2 07:50:51.319016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2419711197.mount: Deactivated successfully. Jul 2 07:50:51.328594 env[1219]: time="2024-07-02T07:50:51.328553308Z" level=info msg="CreateContainer within sandbox \"61dabb8e91cd1c42f4480b71fd5f0c432ad9f16963c57a6bdc223fb9e6ee5170\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"549432165fde407a3cc2e254b9cbad5f933809af0e10ef63ecd64d8e18db76a9\"" Jul 2 07:50:51.329442 env[1219]: time="2024-07-02T07:50:51.329411233Z" level=info msg="StartContainer for \"549432165fde407a3cc2e254b9cbad5f933809af0e10ef63ecd64d8e18db76a9\"" Jul 2 07:50:51.358426 systemd[1]: Started cri-containerd-549432165fde407a3cc2e254b9cbad5f933809af0e10ef63ecd64d8e18db76a9.scope. 
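Three image pulls have completed so far (the cilium image, kube-proxy and nginx), each bracketed by a PullImage request and a "returns image reference" completion. Pairing those two entries by image reference gives the pull times; the figures in the trailing comment are read off the journal timestamps above, and journal.log remains a stand-in filename.

    # Compute per-image pull durations from the PullImage request/completion pairs.
    import re
    from datetime import datetime

    text = " ".join(open("journal.log", encoding="utf-8").read().split())
    entries = re.split(r'(?=Jul\s+2 \d{2}:\d{2}:\d{2}\.\d+ )', text)

    def when(entry):
        return datetime.strptime(entry.split()[2][:15], "%H:%M:%S.%f")

    started, done = {}, {}
    for e in entries[1:]:
        m = re.search(r'PullImage \\?"([^\\"]+)\\?"', e)
        if not m:
            continue
        ref = m.group(1)
        if "returns image reference" in e:
            done.setdefault(ref, when(e))
        else:
            started.setdefault(ref, when(e))

    for ref, t0 in started.items():
        if ref in done:
            print(f"{(done[ref] - t0).total_seconds():6.2f}s  {ref}")
    # Roughly 8.5s for quay.io/cilium/cilium:v1.12.5, 3.5s for
    # registry.k8s.io/kube-proxy:v1.29.6 and 4.3s for ghcr.io/flatcar/nginx:latest.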
Jul 2 07:50:51.394960 env[1219]: time="2024-07-02T07:50:51.394909337Z" level=info msg="StartContainer for \"549432165fde407a3cc2e254b9cbad5f933809af0e10ef63ecd64d8e18db76a9\" returns successfully" Jul 2 07:50:52.024185 kubelet[1539]: E0702 07:50:52.024121 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:52.384964 kubelet[1539]: I0702 07:50:52.384802 1539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-7bzkd" podStartSLOduration=7.125389487 podStartE2EDuration="11.384753044s" podCreationTimestamp="2024-07-02 07:50:41 +0000 UTC" firstStartedPulling="2024-07-02 07:50:47.041221122 +0000 UTC m=+26.148305570" lastFinishedPulling="2024-07-02 07:50:51.300584676 +0000 UTC m=+30.407669127" observedRunningTime="2024-07-02 07:50:52.384584608 +0000 UTC m=+31.491669083" watchObservedRunningTime="2024-07-02 07:50:52.384753044 +0000 UTC m=+31.491837531" Jul 2 07:50:53.025232 kubelet[1539]: E0702 07:50:53.025180 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:54.026262 kubelet[1539]: E0702 07:50:54.026200 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:55.026587 kubelet[1539]: E0702 07:50:55.026528 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:56.027359 kubelet[1539]: E0702 07:50:56.027302 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:56.241039 kubelet[1539]: I0702 07:50:56.240991 1539 topology_manager.go:215] "Topology Admit Handler" podUID="f097d73e-b3ee-456e-bb12-4b1939ca72a1" podNamespace="default" podName="nfs-server-provisioner-0" Jul 2 07:50:56.248273 systemd[1]: Created slice kubepods-besteffort-podf097d73e_b3ee_456e_bb12_4b1939ca72a1.slice. 
Jul 2 07:50:56.260460 kubelet[1539]: I0702 07:50:56.260402 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/f097d73e-b3ee-456e-bb12-4b1939ca72a1-data\") pod \"nfs-server-provisioner-0\" (UID: \"f097d73e-b3ee-456e-bb12-4b1939ca72a1\") " pod="default/nfs-server-provisioner-0" Jul 2 07:50:56.260615 kubelet[1539]: I0702 07:50:56.260464 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hml7\" (UniqueName: \"kubernetes.io/projected/f097d73e-b3ee-456e-bb12-4b1939ca72a1-kube-api-access-6hml7\") pod \"nfs-server-provisioner-0\" (UID: \"f097d73e-b3ee-456e-bb12-4b1939ca72a1\") " pod="default/nfs-server-provisioner-0" Jul 2 07:50:56.552861 env[1219]: time="2024-07-02T07:50:56.552801404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f097d73e-b3ee-456e-bb12-4b1939ca72a1,Namespace:default,Attempt:0,}" Jul 2 07:50:56.595678 systemd-networkd[1028]: lxce761e6bc39e0: Link UP Jul 2 07:50:56.604922 kernel: eth0: renamed from tmp48991 Jul 2 07:50:56.632556 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:50:56.632667 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce761e6bc39e0: link becomes ready Jul 2 07:50:56.632805 systemd-networkd[1028]: lxce761e6bc39e0: Gained carrier Jul 2 07:50:56.909178 env[1219]: time="2024-07-02T07:50:56.908601062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:50:56.909178 env[1219]: time="2024-07-02T07:50:56.908708153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:50:56.909178 env[1219]: time="2024-07-02T07:50:56.908748674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:50:56.909178 env[1219]: time="2024-07-02T07:50:56.908970181Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/489918f5549643c738583b10eae5073603438de017f0576bb7a174fadf1b926f pid=2713 runtime=io.containerd.runc.v2 Jul 2 07:50:56.937495 systemd[1]: Started cri-containerd-489918f5549643c738583b10eae5073603438de017f0576bb7a174fadf1b926f.scope. Jul 2 07:50:56.999575 env[1219]: time="2024-07-02T07:50:56.999521899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f097d73e-b3ee-456e-bb12-4b1939ca72a1,Namespace:default,Attempt:0,} returns sandbox id \"489918f5549643c738583b10eae5073603438de017f0576bb7a174fadf1b926f\"" Jul 2 07:50:57.001939 env[1219]: time="2024-07-02T07:50:57.001833683Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 2 07:50:57.028362 kubelet[1539]: E0702 07:50:57.028318 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:57.375252 systemd[1]: run-containerd-runc-k8s.io-489918f5549643c738583b10eae5073603438de017f0576bb7a174fadf1b926f-runc.CuuQ9t.mount: Deactivated successfully. 
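systemd-networkd reports every CNI-related interface in this capture going through the same Link UP, Gained carrier, Gained IPv6LL sequence (cilium_host, cilium_net, cilium_vxlan, lxc_health, and a per-pod lxc* device such as lxce761e6bc39e0 for nfs-server-provisioner-0 above). The sketch below groups those events per interface; whitespace is normalised first because some entries are wrapped across lines in this capture, and journal.log is an assumed filename.

    # Group systemd-networkd link events by interface name.
    import re
    from collections import defaultdict

    text = " ".join(open("journal.log", encoding="utf-8").read().split())
    EVENT = re.compile(
        r'systemd-networkd\[\d+\]: ([\w.]+): '
        r'(Link UP|Link DOWN|Gained carrier|Lost carrier|Gained IPv6LL)')

    events = defaultdict(list)
    for iface, what in EVENT.findall(text):
        events[iface].append(what)

    for iface, seq in events.items():
        print(f"{iface:18s} {' -> '.join(seq)}")
    # lxce761e6bc39e0 shows Link UP -> Gained carrier -> Gained IPv6LL; later in
    # the capture lxc_health ends with Link DOWN -> Lost carrier when cilium stops.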
Jul 2 07:50:58.029342 kubelet[1539]: E0702 07:50:58.029299 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:58.582430 systemd-networkd[1028]: lxce761e6bc39e0: Gained IPv6LL Jul 2 07:50:59.030463 kubelet[1539]: E0702 07:50:59.030374 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:50:59.559587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2930335817.mount: Deactivated successfully. Jul 2 07:51:00.031006 kubelet[1539]: E0702 07:51:00.030943 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:01.031172 kubelet[1539]: E0702 07:51:01.031088 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:01.921578 env[1219]: time="2024-07-02T07:51:01.921499000Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:01.924714 env[1219]: time="2024-07-02T07:51:01.924670625Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:01.927128 env[1219]: time="2024-07-02T07:51:01.927094131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:01.929464 env[1219]: time="2024-07-02T07:51:01.929414281Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:01.930553 env[1219]: time="2024-07-02T07:51:01.930511600Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jul 2 07:51:01.933606 env[1219]: time="2024-07-02T07:51:01.933557316Z" level=info msg="CreateContainer within sandbox \"489918f5549643c738583b10eae5073603438de017f0576bb7a174fadf1b926f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 2 07:51:01.949074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4033631894.mount: Deactivated successfully. Jul 2 07:51:01.960242 env[1219]: time="2024-07-02T07:51:01.960193526Z" level=info msg="CreateContainer within sandbox \"489918f5549643c738583b10eae5073603438de017f0576bb7a174fadf1b926f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"c0f5dfdfc2896e3de9c034f1624c7c6b522fb1313b06034c557fc825f6158d87\"" Jul 2 07:51:01.961120 env[1219]: time="2024-07-02T07:51:01.961083029Z" level=info msg="StartContainer for \"c0f5dfdfc2896e3de9c034f1624c7c6b522fb1313b06034c557fc825f6158d87\"" Jul 2 07:51:01.995558 systemd[1]: Started cri-containerd-c0f5dfdfc2896e3de9c034f1624c7c6b522fb1313b06034c557fc825f6158d87.scope. 
Jul 2 07:51:02.000075 kubelet[1539]: E0702 07:51:02.000042 1539 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:02.031782 kubelet[1539]: E0702 07:51:02.031676 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:02.035623 env[1219]: time="2024-07-02T07:51:02.035531250Z" level=info msg="StartContainer for \"c0f5dfdfc2896e3de9c034f1624c7c6b522fb1313b06034c557fc825f6158d87\" returns successfully" Jul 2 07:51:03.032742 kubelet[1539]: E0702 07:51:03.032672 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:04.033441 kubelet[1539]: E0702 07:51:04.033383 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:05.033736 kubelet[1539]: E0702 07:51:05.033669 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:06.034184 kubelet[1539]: E0702 07:51:06.034126 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:07.034855 kubelet[1539]: E0702 07:51:07.034792 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:08.035602 kubelet[1539]: E0702 07:51:08.035532 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:09.036683 kubelet[1539]: E0702 07:51:09.036612 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:10.037676 kubelet[1539]: E0702 07:51:10.037610 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:11.038444 kubelet[1539]: E0702 07:51:11.038376 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:11.577926 kubelet[1539]: I0702 07:51:11.577858 1539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.648127159 podStartE2EDuration="15.577795392s" podCreationTimestamp="2024-07-02 07:50:56 +0000 UTC" firstStartedPulling="2024-07-02 07:50:57.001289647 +0000 UTC m=+36.108374095" lastFinishedPulling="2024-07-02 07:51:01.930957866 +0000 UTC m=+41.038042328" observedRunningTime="2024-07-02 07:51:02.430141922 +0000 UTC m=+41.537226390" watchObservedRunningTime="2024-07-02 07:51:11.577795392 +0000 UTC m=+50.684879912" Jul 2 07:51:11.578258 kubelet[1539]: I0702 07:51:11.578229 1539 topology_manager.go:215] "Topology Admit Handler" podUID="9ad345c5-324b-41e7-9ac6-71f9db868962" podNamespace="default" podName="test-pod-1" Jul 2 07:51:11.585191 systemd[1]: Created slice kubepods-besteffort-pod9ad345c5_324b_41e7_9ac6_71f9db868962.slice. 
Jul 2 07:51:11.654302 kubelet[1539]: I0702 07:51:11.654254 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvbmf\" (UniqueName: \"kubernetes.io/projected/9ad345c5-324b-41e7-9ac6-71f9db868962-kube-api-access-wvbmf\") pod \"test-pod-1\" (UID: \"9ad345c5-324b-41e7-9ac6-71f9db868962\") " pod="default/test-pod-1" Jul 2 07:51:11.654505 kubelet[1539]: I0702 07:51:11.654317 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6733b8f8-3106-4617-ab1e-3332ee160d79\" (UniqueName: \"kubernetes.io/nfs/9ad345c5-324b-41e7-9ac6-71f9db868962-pvc-6733b8f8-3106-4617-ab1e-3332ee160d79\") pod \"test-pod-1\" (UID: \"9ad345c5-324b-41e7-9ac6-71f9db868962\") " pod="default/test-pod-1" Jul 2 07:51:11.798909 kernel: FS-Cache: Loaded Jul 2 07:51:11.860827 kernel: RPC: Registered named UNIX socket transport module. Jul 2 07:51:11.861011 kernel: RPC: Registered udp transport module. Jul 2 07:51:11.861057 kernel: RPC: Registered tcp transport module. Jul 2 07:51:11.865651 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 2 07:51:11.951889 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 2 07:51:12.039357 kubelet[1539]: E0702 07:51:12.039315 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:12.180517 kernel: NFS: Registering the id_resolver key type Jul 2 07:51:12.180701 kernel: Key type id_resolver registered Jul 2 07:51:12.185903 kernel: Key type id_legacy registered Jul 2 07:51:12.239156 nfsidmap[2867]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Jul 2 07:51:12.247929 nfsidmap[2868]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Jul 2 07:51:12.489632 env[1219]: time="2024-07-02T07:51:12.489482214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9ad345c5-324b-41e7-9ac6-71f9db868962,Namespace:default,Attempt:0,}" Jul 2 07:51:12.533123 systemd-networkd[1028]: lxccbe626b388f3: Link UP Jul 2 07:51:12.543910 kernel: eth0: renamed from tmpc766a Jul 2 07:51:12.561139 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:51:12.561236 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccbe626b388f3: link becomes ready Jul 2 07:51:12.562100 systemd-networkd[1028]: lxccbe626b388f3: Gained carrier Jul 2 07:51:12.840935 env[1219]: time="2024-07-02T07:51:12.840816165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:51:12.841195 env[1219]: time="2024-07-02T07:51:12.840903313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:51:12.841195 env[1219]: time="2024-07-02T07:51:12.840924564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:51:12.841195 env[1219]: time="2024-07-02T07:51:12.841095682Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c766ad940928281fd5c4d8bd32de60fface2e0053063ec4f6e3b1b4a1d3ca8b8 pid=2895 runtime=io.containerd.runc.v2 Jul 2 07:51:12.867515 systemd[1]: Started cri-containerd-c766ad940928281fd5c4d8bd32de60fface2e0053063ec4f6e3b1b4a1d3ca8b8.scope. 
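The two nfsidmap entries above record NFSv4 id-mapping lookups failing because the principal's domain (nfs-server-provisioner.default.svc.cluster.local) does not map into the node's idmap domain (c.flatcar-212911.internal), as the messages themselves state. A small extractor for those records, useful when skimming a longer capture; journal.log is an assumed filename.

    # List the NFSv4 id-mapping failures reported by nfsidmap.
    import re

    FAIL = re.compile(
        r"nfsidmap\[\d+\]: (\w+): name '([^']+)' does not map into domain '([^']+)'")
    text = " ".join(open("journal.log", encoding="utf-8").read().split())
    for func, principal, domain in FAIL.findall(text):
        print(f"{func}: {principal} (local idmap domain: {domain})")
    # Both records here involve root@nfs-server-provisioner.default.svc.cluster.local
    # against the node domain c.flatcar-212911.internal.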
Jul 2 07:51:12.923813 env[1219]: time="2024-07-02T07:51:12.923757746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9ad345c5-324b-41e7-9ac6-71f9db868962,Namespace:default,Attempt:0,} returns sandbox id \"c766ad940928281fd5c4d8bd32de60fface2e0053063ec4f6e3b1b4a1d3ca8b8\"" Jul 2 07:51:12.926014 env[1219]: time="2024-07-02T07:51:12.925957073Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 07:51:13.040338 kubelet[1539]: E0702 07:51:13.040274 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:13.117354 env[1219]: time="2024-07-02T07:51:13.116767240Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:13.119695 env[1219]: time="2024-07-02T07:51:13.119645655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:13.122051 env[1219]: time="2024-07-02T07:51:13.122013321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:13.124389 env[1219]: time="2024-07-02T07:51:13.124352550Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:13.125267 env[1219]: time="2024-07-02T07:51:13.125217783Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 07:51:13.128229 env[1219]: time="2024-07-02T07:51:13.128174136Z" level=info msg="CreateContainer within sandbox \"c766ad940928281fd5c4d8bd32de60fface2e0053063ec4f6e3b1b4a1d3ca8b8\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 2 07:51:13.151970 env[1219]: time="2024-07-02T07:51:13.151913532Z" level=info msg="CreateContainer within sandbox \"c766ad940928281fd5c4d8bd32de60fface2e0053063ec4f6e3b1b4a1d3ca8b8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b31bcd5303609c95757c18fa7926b84ccfd31df3c6400bfe8bf3123d7c14367d\"" Jul 2 07:51:13.152721 env[1219]: time="2024-07-02T07:51:13.152676344Z" level=info msg="StartContainer for \"b31bcd5303609c95757c18fa7926b84ccfd31df3c6400bfe8bf3123d7c14367d\"" Jul 2 07:51:13.175711 systemd[1]: Started cri-containerd-b31bcd5303609c95757c18fa7926b84ccfd31df3c6400bfe8bf3123d7c14367d.scope. 
Jul 2 07:51:13.214696 env[1219]: time="2024-07-02T07:51:13.214642209Z" level=info msg="StartContainer for \"b31bcd5303609c95757c18fa7926b84ccfd31df3c6400bfe8bf3123d7c14367d\" returns successfully" Jul 2 07:51:13.454599 kubelet[1539]: I0702 07:51:13.453939 1539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.253527411 podStartE2EDuration="17.45388334s" podCreationTimestamp="2024-07-02 07:50:56 +0000 UTC" firstStartedPulling="2024-07-02 07:51:12.92526185 +0000 UTC m=+52.032346295" lastFinishedPulling="2024-07-02 07:51:13.125617768 +0000 UTC m=+52.232702224" observedRunningTime="2024-07-02 07:51:13.453622378 +0000 UTC m=+52.560706851" watchObservedRunningTime="2024-07-02 07:51:13.45388334 +0000 UTC m=+52.560967805" Jul 2 07:51:14.041346 kubelet[1539]: E0702 07:51:14.041285 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:14.390131 systemd-networkd[1028]: lxccbe626b388f3: Gained IPv6LL Jul 2 07:51:15.042223 kubelet[1539]: E0702 07:51:15.042165 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:16.042847 kubelet[1539]: E0702 07:51:16.042789 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:16.649970 systemd[1]: run-containerd-runc-k8s.io-59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e-runc.QTrGSt.mount: Deactivated successfully. Jul 2 07:51:16.674001 env[1219]: time="2024-07-02T07:51:16.673928171Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:51:16.681906 env[1219]: time="2024-07-02T07:51:16.681831776Z" level=info msg="StopContainer for \"59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e\" with timeout 2 (s)" Jul 2 07:51:16.682398 env[1219]: time="2024-07-02T07:51:16.682360767Z" level=info msg="Stop container \"59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e\" with signal terminated" Jul 2 07:51:16.690695 systemd-networkd[1028]: lxc_health: Link DOWN Jul 2 07:51:16.690706 systemd-networkd[1028]: lxc_health: Lost carrier Jul 2 07:51:16.713480 systemd[1]: cri-containerd-59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e.scope: Deactivated successfully. Jul 2 07:51:16.713829 systemd[1]: cri-containerd-59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e.scope: Consumed 8.581s CPU time. Jul 2 07:51:16.748508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e-rootfs.mount: Deactivated successfully. 
Jul 2 07:51:17.043996 kubelet[1539]: E0702 07:51:17.043932 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:17.122333 kubelet[1539]: E0702 07:51:17.122286 1539 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:51:18.044153 kubelet[1539]: E0702 07:51:18.044108 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:18.494983 env[1219]: time="2024-07-02T07:51:18.494799392Z" level=info msg="shim disconnected" id=59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e Jul 2 07:51:18.495522 env[1219]: time="2024-07-02T07:51:18.494936608Z" level=warning msg="cleaning up after shim disconnected" id=59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e namespace=k8s.io Jul 2 07:51:18.495522 env[1219]: time="2024-07-02T07:51:18.495300567Z" level=info msg="cleaning up dead shim" Jul 2 07:51:18.507678 env[1219]: time="2024-07-02T07:51:18.507625551Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3030 runtime=io.containerd.runc.v2\n" Jul 2 07:51:18.511264 env[1219]: time="2024-07-02T07:51:18.511208670Z" level=info msg="StopContainer for \"59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e\" returns successfully" Jul 2 07:51:18.512135 env[1219]: time="2024-07-02T07:51:18.512085870Z" level=info msg="StopPodSandbox for \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\"" Jul 2 07:51:18.512280 env[1219]: time="2024-07-02T07:51:18.512157800Z" level=info msg="Container to stop \"59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:51:18.512280 env[1219]: time="2024-07-02T07:51:18.512181113Z" level=info msg="Container to stop \"05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:51:18.512280 env[1219]: time="2024-07-02T07:51:18.512200440Z" level=info msg="Container to stop \"eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:51:18.512280 env[1219]: time="2024-07-02T07:51:18.512219238Z" level=info msg="Container to stop \"6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:51:18.512280 env[1219]: time="2024-07-02T07:51:18.512237467Z" level=info msg="Container to stop \"108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:51:18.515438 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557-shm.mount: Deactivated successfully. Jul 2 07:51:18.524322 systemd[1]: cri-containerd-2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557.scope: Deactivated successfully. Jul 2 07:51:18.550647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557-rootfs.mount: Deactivated successfully. 
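The entries above and below walk through the Cilium pod's teardown: the CNI config removal, StopContainer with a 2-second timeout, lxc_health going down, the shim and sandbox being cleaned up, TearDown network, and then kubelet's operationExecutor.UnmountVolume calls for the volumes attached at pod start. A sketch that pulls just that timeline out of a dump like this one (journal.log assumed):

    # Print the teardown-related entries for this capture in order.
    import re

    text = " ".join(open("journal.log", encoding="utf-8").read().split())
    entries = re.split(r'(?=Jul\s+2 \d{2}:\d{2}:\d{2}\.\d+ )', text)

    KEYS = ("StopContainer", "StopPodSandbox", "TearDown network",
            "UnmountVolume started")
    for entry in entries[1:]:
        if any(k in entry for k in KEYS):
            print(entry[:140])
    # The UnmountVolume lines mirror the VerifyControllerAttachedVolume lines from
    # pod creation (cni-path, clustermesh-secrets, hostproc, bpf-maps, hubble-tls
    # and the rest).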
Jul 2 07:51:18.555934 env[1219]: time="2024-07-02T07:51:18.555848565Z" level=info msg="shim disconnected" id=2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557 Jul 2 07:51:18.556204 env[1219]: time="2024-07-02T07:51:18.556176754Z" level=warning msg="cleaning up after shim disconnected" id=2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557 namespace=k8s.io Jul 2 07:51:18.556327 env[1219]: time="2024-07-02T07:51:18.556304190Z" level=info msg="cleaning up dead shim" Jul 2 07:51:18.567358 env[1219]: time="2024-07-02T07:51:18.567311326Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3061 runtime=io.containerd.runc.v2\n" Jul 2 07:51:18.567740 env[1219]: time="2024-07-02T07:51:18.567702202Z" level=info msg="TearDown network for sandbox \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" successfully" Jul 2 07:51:18.567852 env[1219]: time="2024-07-02T07:51:18.567739714Z" level=info msg="StopPodSandbox for \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" returns successfully" Jul 2 07:51:18.599889 kubelet[1539]: I0702 07:51:18.598195 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cni-path\") pod \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " Jul 2 07:51:18.599889 kubelet[1539]: I0702 07:51:18.598254 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-clustermesh-secrets\") pod \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " Jul 2 07:51:18.599889 kubelet[1539]: I0702 07:51:18.598258 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cni-path" (OuterVolumeSpecName: "cni-path") pod "d2d465cd-b932-4417-a9e5-b3042d8a5ebe" (UID: "d2d465cd-b932-4417-a9e5-b3042d8a5ebe"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:18.599889 kubelet[1539]: I0702 07:51:18.598291 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-host-proc-sys-kernel\") pod \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " Jul 2 07:51:18.599889 kubelet[1539]: I0702 07:51:18.598321 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-lib-modules\") pod \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " Jul 2 07:51:18.599889 kubelet[1539]: I0702 07:51:18.598353 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cilium-cgroup\") pod \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " Jul 2 07:51:18.600376 kubelet[1539]: I0702 07:51:18.598383 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-xtables-lock\") pod \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " Jul 2 07:51:18.600376 kubelet[1539]: I0702 07:51:18.598411 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-host-proc-sys-net\") pod \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " Jul 2 07:51:18.600376 kubelet[1539]: I0702 07:51:18.598440 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-bpf-maps\") pod \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " Jul 2 07:51:18.600376 kubelet[1539]: I0702 07:51:18.598484 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-hostproc\") pod \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " Jul 2 07:51:18.600376 kubelet[1539]: I0702 07:51:18.598521 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhc8r\" (UniqueName: \"kubernetes.io/projected/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-kube-api-access-rhc8r\") pod \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " Jul 2 07:51:18.600376 kubelet[1539]: I0702 07:51:18.598550 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-etc-cni-netd\") pod \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " Jul 2 07:51:18.600707 kubelet[1539]: I0702 07:51:18.598582 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-hubble-tls\") pod \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " Jul 2 
07:51:18.600707 kubelet[1539]: I0702 07:51:18.598615 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cilium-run\") pod \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " Jul 2 07:51:18.600707 kubelet[1539]: I0702 07:51:18.598648 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cilium-config-path\") pod \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\" (UID: \"d2d465cd-b932-4417-a9e5-b3042d8a5ebe\") " Jul 2 07:51:18.600707 kubelet[1539]: I0702 07:51:18.598688 1539 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cni-path\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:18.600707 kubelet[1539]: I0702 07:51:18.599197 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d2d465cd-b932-4417-a9e5-b3042d8a5ebe" (UID: "d2d465cd-b932-4417-a9e5-b3042d8a5ebe"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:18.600707 kubelet[1539]: I0702 07:51:18.599257 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d2d465cd-b932-4417-a9e5-b3042d8a5ebe" (UID: "d2d465cd-b932-4417-a9e5-b3042d8a5ebe"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:18.601080 kubelet[1539]: I0702 07:51:18.599295 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d2d465cd-b932-4417-a9e5-b3042d8a5ebe" (UID: "d2d465cd-b932-4417-a9e5-b3042d8a5ebe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:18.601080 kubelet[1539]: I0702 07:51:18.599324 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d2d465cd-b932-4417-a9e5-b3042d8a5ebe" (UID: "d2d465cd-b932-4417-a9e5-b3042d8a5ebe"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:18.601080 kubelet[1539]: I0702 07:51:18.599350 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d2d465cd-b932-4417-a9e5-b3042d8a5ebe" (UID: "d2d465cd-b932-4417-a9e5-b3042d8a5ebe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:18.601080 kubelet[1539]: I0702 07:51:18.599376 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d2d465cd-b932-4417-a9e5-b3042d8a5ebe" (UID: "d2d465cd-b932-4417-a9e5-b3042d8a5ebe"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:18.601080 kubelet[1539]: I0702 07:51:18.599404 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d2d465cd-b932-4417-a9e5-b3042d8a5ebe" (UID: "d2d465cd-b932-4417-a9e5-b3042d8a5ebe"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:18.601360 kubelet[1539]: I0702 07:51:18.599431 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-hostproc" (OuterVolumeSpecName: "hostproc") pod "d2d465cd-b932-4417-a9e5-b3042d8a5ebe" (UID: "d2d465cd-b932-4417-a9e5-b3042d8a5ebe"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:18.601893 kubelet[1539]: I0702 07:51:18.601841 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d2d465cd-b932-4417-a9e5-b3042d8a5ebe" (UID: "d2d465cd-b932-4417-a9e5-b3042d8a5ebe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:18.603891 kubelet[1539]: I0702 07:51:18.603817 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d2d465cd-b932-4417-a9e5-b3042d8a5ebe" (UID: "d2d465cd-b932-4417-a9e5-b3042d8a5ebe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:51:18.608761 systemd[1]: var-lib-kubelet-pods-d2d465cd\x2db932\x2d4417\x2da9e5\x2db3042d8a5ebe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:51:18.610915 kubelet[1539]: I0702 07:51:18.610882 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d2d465cd-b932-4417-a9e5-b3042d8a5ebe" (UID: "d2d465cd-b932-4417-a9e5-b3042d8a5ebe"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:51:18.614770 systemd[1]: var-lib-kubelet-pods-d2d465cd\x2db932\x2d4417\x2da9e5\x2db3042d8a5ebe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:51:18.616510 kubelet[1539]: I0702 07:51:18.616480 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d2d465cd-b932-4417-a9e5-b3042d8a5ebe" (UID: "d2d465cd-b932-4417-a9e5-b3042d8a5ebe"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:51:18.619956 kubelet[1539]: I0702 07:51:18.617747 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-kube-api-access-rhc8r" (OuterVolumeSpecName: "kube-api-access-rhc8r") pod "d2d465cd-b932-4417-a9e5-b3042d8a5ebe" (UID: "d2d465cd-b932-4417-a9e5-b3042d8a5ebe"). InnerVolumeSpecName "kube-api-access-rhc8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:51:18.620449 systemd[1]: var-lib-kubelet-pods-d2d465cd\x2db932\x2d4417\x2da9e5\x2db3042d8a5ebe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drhc8r.mount: Deactivated successfully. Jul 2 07:51:18.699158 kubelet[1539]: I0702 07:51:18.699089 1539 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-clustermesh-secrets\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:18.699158 kubelet[1539]: I0702 07:51:18.699141 1539 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-lib-modules\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:18.699158 kubelet[1539]: I0702 07:51:18.699160 1539 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-host-proc-sys-kernel\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:18.699456 kubelet[1539]: I0702 07:51:18.699176 1539 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-hostproc\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:18.699456 kubelet[1539]: I0702 07:51:18.699195 1539 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rhc8r\" (UniqueName: \"kubernetes.io/projected/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-kube-api-access-rhc8r\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:18.699456 kubelet[1539]: I0702 07:51:18.699210 1539 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cilium-cgroup\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:18.699456 kubelet[1539]: I0702 07:51:18.699223 1539 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-xtables-lock\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:18.699456 kubelet[1539]: I0702 07:51:18.699241 1539 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-host-proc-sys-net\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:18.699456 kubelet[1539]: I0702 07:51:18.699255 1539 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-bpf-maps\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:18.699456 kubelet[1539]: I0702 07:51:18.699270 1539 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cilium-config-path\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:18.699456 kubelet[1539]: I0702 07:51:18.699285 1539 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-etc-cni-netd\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:18.699717 kubelet[1539]: I0702 07:51:18.699300 1539 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-hubble-tls\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:18.699717 
kubelet[1539]: I0702 07:51:18.699315 1539 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2d465cd-b932-4417-a9e5-b3042d8a5ebe-cilium-run\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:19.045325 kubelet[1539]: E0702 07:51:19.045259 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:19.460940 kubelet[1539]: I0702 07:51:19.460618 1539 scope.go:117] "RemoveContainer" containerID="59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e" Jul 2 07:51:19.463910 env[1219]: time="2024-07-02T07:51:19.463375619Z" level=info msg="RemoveContainer for \"59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e\"" Jul 2 07:51:19.466480 systemd[1]: Removed slice kubepods-burstable-podd2d465cd_b932_4417_a9e5_b3042d8a5ebe.slice. Jul 2 07:51:19.466684 systemd[1]: kubepods-burstable-podd2d465cd_b932_4417_a9e5_b3042d8a5ebe.slice: Consumed 8.731s CPU time. Jul 2 07:51:19.468459 env[1219]: time="2024-07-02T07:51:19.468388405Z" level=info msg="RemoveContainer for \"59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e\" returns successfully" Jul 2 07:51:19.468787 kubelet[1539]: I0702 07:51:19.468766 1539 scope.go:117] "RemoveContainer" containerID="6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb" Jul 2 07:51:19.470608 env[1219]: time="2024-07-02T07:51:19.470563998Z" level=info msg="RemoveContainer for \"6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb\"" Jul 2 07:51:19.474133 env[1219]: time="2024-07-02T07:51:19.474093743Z" level=info msg="RemoveContainer for \"6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb\" returns successfully" Jul 2 07:51:19.474322 kubelet[1539]: I0702 07:51:19.474292 1539 scope.go:117] "RemoveContainer" containerID="eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89" Jul 2 07:51:19.475734 env[1219]: time="2024-07-02T07:51:19.475686633Z" level=info msg="RemoveContainer for \"eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89\"" Jul 2 07:51:19.479112 env[1219]: time="2024-07-02T07:51:19.479075066Z" level=info msg="RemoveContainer for \"eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89\" returns successfully" Jul 2 07:51:19.479385 kubelet[1539]: I0702 07:51:19.479339 1539 scope.go:117] "RemoveContainer" containerID="108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86" Jul 2 07:51:19.481103 env[1219]: time="2024-07-02T07:51:19.480763068Z" level=info msg="RemoveContainer for \"108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86\"" Jul 2 07:51:19.484110 env[1219]: time="2024-07-02T07:51:19.484073792Z" level=info msg="RemoveContainer for \"108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86\" returns successfully" Jul 2 07:51:19.484281 kubelet[1539]: I0702 07:51:19.484254 1539 scope.go:117] "RemoveContainer" containerID="05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4" Jul 2 07:51:19.485984 env[1219]: time="2024-07-02T07:51:19.485907123Z" level=info msg="RemoveContainer for \"05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4\"" Jul 2 07:51:19.489347 env[1219]: time="2024-07-02T07:51:19.489309188Z" level=info msg="RemoveContainer for \"05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4\" returns successfully" Jul 2 07:51:19.489510 kubelet[1539]: I0702 07:51:19.489493 1539 scope.go:117] "RemoveContainer" 
containerID="59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e" Jul 2 07:51:19.489818 env[1219]: time="2024-07-02T07:51:19.489731467Z" level=error msg="ContainerStatus for \"59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e\": not found" Jul 2 07:51:19.490009 kubelet[1539]: E0702 07:51:19.489987 1539 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e\": not found" containerID="59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e" Jul 2 07:51:19.490120 kubelet[1539]: I0702 07:51:19.490105 1539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e"} err="failed to get container status \"59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e\": rpc error: code = NotFound desc = an error occurred when try to find container \"59efafd24d4afbd9307f8bd86ca4f6e0fad0583714da6ed72a057cb3f8492c7e\": not found" Jul 2 07:51:19.490189 kubelet[1539]: I0702 07:51:19.490128 1539 scope.go:117] "RemoveContainer" containerID="6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb" Jul 2 07:51:19.490454 env[1219]: time="2024-07-02T07:51:19.490354344Z" level=error msg="ContainerStatus for \"6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb\": not found" Jul 2 07:51:19.490698 kubelet[1539]: E0702 07:51:19.490678 1539 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb\": not found" containerID="6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb" Jul 2 07:51:19.490860 kubelet[1539]: I0702 07:51:19.490842 1539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb"} err="failed to get container status \"6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ddf4b1cb1d20e3695873e1d0bf51d889f7c7a820ad459664c44847adbfcb5cb\": not found" Jul 2 07:51:19.490971 kubelet[1539]: I0702 07:51:19.490904 1539 scope.go:117] "RemoveContainer" containerID="eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89" Jul 2 07:51:19.491272 env[1219]: time="2024-07-02T07:51:19.491207005Z" level=error msg="ContainerStatus for \"eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89\": not found" Jul 2 07:51:19.491414 kubelet[1539]: E0702 07:51:19.491392 1539 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89\": not found" 
containerID="eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89" Jul 2 07:51:19.491511 kubelet[1539]: I0702 07:51:19.491435 1539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89"} err="failed to get container status \"eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89\": rpc error: code = NotFound desc = an error occurred when try to find container \"eeece0e5b36315f110b706039adb98490de106cc2eb418138d2f8b2bb0bcce89\": not found" Jul 2 07:51:19.491511 kubelet[1539]: I0702 07:51:19.491453 1539 scope.go:117] "RemoveContainer" containerID="108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86" Jul 2 07:51:19.491755 env[1219]: time="2024-07-02T07:51:19.491670922Z" level=error msg="ContainerStatus for \"108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86\": not found" Jul 2 07:51:19.491922 kubelet[1539]: E0702 07:51:19.491899 1539 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86\": not found" containerID="108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86" Jul 2 07:51:19.492037 kubelet[1539]: I0702 07:51:19.491940 1539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86"} err="failed to get container status \"108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86\": rpc error: code = NotFound desc = an error occurred when try to find container \"108266ab5c03393859e62b4b7bac28b5784f9352f4742216dbc381c5fd181f86\": not found" Jul 2 07:51:19.492037 kubelet[1539]: I0702 07:51:19.491957 1539 scope.go:117] "RemoveContainer" containerID="05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4" Jul 2 07:51:19.492249 env[1219]: time="2024-07-02T07:51:19.492182240Z" level=error msg="ContainerStatus for \"05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4\": not found" Jul 2 07:51:19.492415 kubelet[1539]: E0702 07:51:19.492393 1539 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4\": not found" containerID="05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4" Jul 2 07:51:19.492520 kubelet[1539]: I0702 07:51:19.492434 1539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4"} err="failed to get container status \"05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"05dd8080d02a6e16cd63be0fd86d742b2b2d98846755b92776891c82a70087f4\": not found" Jul 2 07:51:20.046162 kubelet[1539]: E0702 07:51:20.046118 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jul 2 07:51:20.182992 kubelet[1539]: I0702 07:51:20.182927 1539 topology_manager.go:215] "Topology Admit Handler" podUID="8e5f577d-8eb1-4580-ba95-b623da985005" podNamespace="kube-system" podName="cilium-operator-5cc964979-dmhqn" Jul 2 07:51:20.183224 kubelet[1539]: E0702 07:51:20.183033 1539 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d2d465cd-b932-4417-a9e5-b3042d8a5ebe" containerName="mount-bpf-fs" Jul 2 07:51:20.183224 kubelet[1539]: E0702 07:51:20.183052 1539 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d2d465cd-b932-4417-a9e5-b3042d8a5ebe" containerName="clean-cilium-state" Jul 2 07:51:20.183224 kubelet[1539]: E0702 07:51:20.183063 1539 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d2d465cd-b932-4417-a9e5-b3042d8a5ebe" containerName="apply-sysctl-overwrites" Jul 2 07:51:20.183224 kubelet[1539]: E0702 07:51:20.183074 1539 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d2d465cd-b932-4417-a9e5-b3042d8a5ebe" containerName="cilium-agent" Jul 2 07:51:20.183224 kubelet[1539]: E0702 07:51:20.183085 1539 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d2d465cd-b932-4417-a9e5-b3042d8a5ebe" containerName="mount-cgroup" Jul 2 07:51:20.183224 kubelet[1539]: I0702 07:51:20.183112 1539 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d465cd-b932-4417-a9e5-b3042d8a5ebe" containerName="cilium-agent" Jul 2 07:51:20.191688 systemd[1]: Created slice kubepods-besteffort-pod8e5f577d_8eb1_4580_ba95_b623da985005.slice. Jul 2 07:51:20.196886 kubelet[1539]: I0702 07:51:20.196837 1539 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d2d465cd-b932-4417-a9e5-b3042d8a5ebe" path="/var/lib/kubelet/pods/d2d465cd-b932-4417-a9e5-b3042d8a5ebe/volumes" Jul 2 07:51:20.207623 kubelet[1539]: I0702 07:51:20.207595 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e5f577d-8eb1-4580-ba95-b623da985005-cilium-config-path\") pod \"cilium-operator-5cc964979-dmhqn\" (UID: \"8e5f577d-8eb1-4580-ba95-b623da985005\") " pod="kube-system/cilium-operator-5cc964979-dmhqn" Jul 2 07:51:20.207824 kubelet[1539]: I0702 07:51:20.207799 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn8hg\" (UniqueName: \"kubernetes.io/projected/8e5f577d-8eb1-4580-ba95-b623da985005-kube-api-access-tn8hg\") pod \"cilium-operator-5cc964979-dmhqn\" (UID: \"8e5f577d-8eb1-4580-ba95-b623da985005\") " pod="kube-system/cilium-operator-5cc964979-dmhqn" Jul 2 07:51:20.243081 kubelet[1539]: I0702 07:51:20.243039 1539 topology_manager.go:215] "Topology Admit Handler" podUID="0495d4bf-8b9a-4e44-9cfc-66c5a6004068" podNamespace="kube-system" podName="cilium-4dzv9" Jul 2 07:51:20.249741 systemd[1]: Created slice kubepods-burstable-pod0495d4bf_8b9a_4e44_9cfc_66c5a6004068.slice. 
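Between the StopPodSandbox above and this point, the kubelet walks every volume of pod d2d465cd-b932-4417-a9e5-b3042d8a5ebe through the same lifecycle: "UnmountVolume started", then "UnmountVolume.TearDown succeeded", then "Volume detached", after which the orphaned volumes directory is cleaned up and the replacement pods (cilium-operator-5cc964979-dmhqn and cilium-4dzv9) are admitted, with the cpu and memory managers dropping stale state for the deleted pod's containers. A sketch that reconciles started unmounts against detach reports, keyed by the UniqueName quoted in both messages, could look roughly like the following; the wording it matches is exactly what the kubelet logs above, and the helper name is ours.

```python
import re
import sys

# Sketch: reconcile the kubelet's "UnmountVolume started" entries against its
# "Volume detached" entries, keyed by the UniqueName quoted in both messages
# (the backslash-escaped quotes are exactly how klog renders them in the
# journal). Helper name is illustrative.
STARTED = re.compile(r'UnmountVolume started for volume \\"[^"\\]+\\" \(UniqueName: \\"(?P<uq>[^"\\]+)\\"')
DETACHED = re.compile(r'Volume detached for volume \\"[^"\\]+\\" \(UniqueName: \\"(?P<uq>[^"\\]+)\\"')

def pending_unmounts(text):
    """Return UniqueNames whose unmount was started but never reported detached."""
    started = {m.group("uq") for m in STARTED.finditer(text)}
    detached = {m.group("uq") for m in DETACHED.finditer(text)}
    return sorted(started - detached)

if __name__ == "__main__":
    leftover = pending_unmounts(sys.stdin.read())
    print("volumes still pending:", leftover if leftover else "none")
```

Run against this section of the capture, every UniqueName for pod d2d465cd that appears in an "UnmountVolume started" entry also appears in a later "Volume detached" entry, i.e. the teardown completed cleanly.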
Jul 2 07:51:20.309093 kubelet[1539]: I0702 07:51:20.308105 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-ipsec-secrets\") pod \"cilium-4dzv9\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " pod="kube-system/cilium-4dzv9" Jul 2 07:51:20.309093 kubelet[1539]: I0702 07:51:20.308186 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cni-path\") pod \"cilium-4dzv9\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " pod="kube-system/cilium-4dzv9" Jul 2 07:51:20.309093 kubelet[1539]: I0702 07:51:20.308224 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-host-proc-sys-kernel\") pod \"cilium-4dzv9\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " pod="kube-system/cilium-4dzv9" Jul 2 07:51:20.309093 kubelet[1539]: I0702 07:51:20.308278 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqkdq\" (UniqueName: \"kubernetes.io/projected/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-kube-api-access-xqkdq\") pod \"cilium-4dzv9\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " pod="kube-system/cilium-4dzv9" Jul 2 07:51:20.309093 kubelet[1539]: I0702 07:51:20.308309 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-run\") pod \"cilium-4dzv9\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " pod="kube-system/cilium-4dzv9" Jul 2 07:51:20.309093 kubelet[1539]: I0702 07:51:20.308361 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-hostproc\") pod \"cilium-4dzv9\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " pod="kube-system/cilium-4dzv9" Jul 2 07:51:20.309573 kubelet[1539]: I0702 07:51:20.308415 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-clustermesh-secrets\") pod \"cilium-4dzv9\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " pod="kube-system/cilium-4dzv9" Jul 2 07:51:20.309573 kubelet[1539]: I0702 07:51:20.308448 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-hubble-tls\") pod \"cilium-4dzv9\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " pod="kube-system/cilium-4dzv9" Jul 2 07:51:20.309573 kubelet[1539]: I0702 07:51:20.308507 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-config-path\") pod \"cilium-4dzv9\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " pod="kube-system/cilium-4dzv9" Jul 2 07:51:20.309573 kubelet[1539]: I0702 07:51:20.308549 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-xtables-lock\") pod \"cilium-4dzv9\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " pod="kube-system/cilium-4dzv9" Jul 2 07:51:20.309573 kubelet[1539]: I0702 07:51:20.308639 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-bpf-maps\") pod \"cilium-4dzv9\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " pod="kube-system/cilium-4dzv9" Jul 2 07:51:20.309573 kubelet[1539]: I0702 07:51:20.308690 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-cgroup\") pod \"cilium-4dzv9\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " pod="kube-system/cilium-4dzv9" Jul 2 07:51:20.309932 kubelet[1539]: I0702 07:51:20.308741 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-host-proc-sys-net\") pod \"cilium-4dzv9\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " pod="kube-system/cilium-4dzv9" Jul 2 07:51:20.309932 kubelet[1539]: I0702 07:51:20.308777 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-lib-modules\") pod \"cilium-4dzv9\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " pod="kube-system/cilium-4dzv9" Jul 2 07:51:20.309932 kubelet[1539]: I0702 07:51:20.308827 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-etc-cni-netd\") pod \"cilium-4dzv9\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " pod="kube-system/cilium-4dzv9" Jul 2 07:51:20.496994 env[1219]: time="2024-07-02T07:51:20.496924391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-dmhqn,Uid:8e5f577d-8eb1-4580-ba95-b623da985005,Namespace:kube-system,Attempt:0,}" Jul 2 07:51:20.514915 env[1219]: time="2024-07-02T07:51:20.514789419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:51:20.514915 env[1219]: time="2024-07-02T07:51:20.514861107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:51:20.515271 env[1219]: time="2024-07-02T07:51:20.515215657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:51:20.515945 env[1219]: time="2024-07-02T07:51:20.515793796Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1d104a0a7642333736772d394c7a2cda33d285d746c484611b6c9835eafe3f7 pid=3090 runtime=io.containerd.runc.v2 Jul 2 07:51:20.532973 systemd[1]: Started cri-containerd-e1d104a0a7642333736772d394c7a2cda33d285d746c484611b6c9835eafe3f7.scope. 
Jul 2 07:51:20.556778 env[1219]: time="2024-07-02T07:51:20.556726950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dzv9,Uid:0495d4bf-8b9a-4e44-9cfc-66c5a6004068,Namespace:kube-system,Attempt:0,}" Jul 2 07:51:20.576807 env[1219]: time="2024-07-02T07:51:20.576665920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:51:20.577052 env[1219]: time="2024-07-02T07:51:20.577012050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:51:20.577221 env[1219]: time="2024-07-02T07:51:20.577187869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:51:20.577522 env[1219]: time="2024-07-02T07:51:20.577483839Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9 pid=3125 runtime=io.containerd.runc.v2 Jul 2 07:51:20.608076 systemd[1]: Started cri-containerd-5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9.scope. Jul 2 07:51:20.612964 env[1219]: time="2024-07-02T07:51:20.610948056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-dmhqn,Uid:8e5f577d-8eb1-4580-ba95-b623da985005,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1d104a0a7642333736772d394c7a2cda33d285d746c484611b6c9835eafe3f7\"" Jul 2 07:51:20.617246 env[1219]: time="2024-07-02T07:51:20.617194935Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 07:51:20.643848 env[1219]: time="2024-07-02T07:51:20.642660082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dzv9,Uid:0495d4bf-8b9a-4e44-9cfc-66c5a6004068,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9\"" Jul 2 07:51:20.646333 env[1219]: time="2024-07-02T07:51:20.646287047Z" level=info msg="CreateContainer within sandbox \"5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:51:20.659967 env[1219]: time="2024-07-02T07:51:20.659914020Z" level=info msg="CreateContainer within sandbox \"5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c\"" Jul 2 07:51:20.660765 env[1219]: time="2024-07-02T07:51:20.660660044Z" level=info msg="StartContainer for \"f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c\"" Jul 2 07:51:20.682408 systemd[1]: Started cri-containerd-f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c.scope. Jul 2 07:51:20.699199 systemd[1]: cri-containerd-f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c.scope: Deactivated successfully. 
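The StartContainer request for the mount-cgroup init container ("f73ba8a8…") is issued around 07:51:20.660, systemd starts the matching cri-containerd scope at 07:51:20.682408, and that scope is already reported deactivated at 07:51:20.699199, roughly 17 ms later, which suggests the container never ran for any meaningful time; the entries that follow show why. A small sketch (ours, using only these two timestamps from the journal prefix) for computing such deltas:

```python
from datetime import datetime

# Sketch using the two systemd timestamps quoted above; the journal prefix has
# no year, so the parsed values are only meaningful as deltas within one capture.
def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%b %d %H:%M:%S.%f")

started = parse("Jul 2 07:51:20.682408")      # Started cri-containerd-f73ba8a8....scope.
deactivated = parse("Jul 2 07:51:20.699199")  # scope: Deactivated successfully.
print(f"mount-cgroup scope lifetime: {(deactivated - started).total_seconds() * 1000:.1f} ms")
```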
Jul 2 07:51:20.712463 env[1219]: time="2024-07-02T07:51:20.712400302Z" level=info msg="shim disconnected" id=f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c Jul 2 07:51:20.712463 env[1219]: time="2024-07-02T07:51:20.712464689Z" level=warning msg="cleaning up after shim disconnected" id=f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c namespace=k8s.io Jul 2 07:51:20.712463 env[1219]: time="2024-07-02T07:51:20.712479489Z" level=info msg="cleaning up dead shim" Jul 2 07:51:20.724708 env[1219]: time="2024-07-02T07:51:20.724649590Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3191 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T07:51:20Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 07:51:20.725320 env[1219]: time="2024-07-02T07:51:20.725179691Z" level=error msg="copy shim log" error="read /proc/self/fd/66: file already closed" Jul 2 07:51:20.725808 env[1219]: time="2024-07-02T07:51:20.725514647Z" level=error msg="Failed to pipe stderr of container \"f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c\"" error="reading from a closed fifo" Jul 2 07:51:20.725959 env[1219]: time="2024-07-02T07:51:20.725540303Z" level=error msg="Failed to pipe stdout of container \"f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c\"" error="reading from a closed fifo" Jul 2 07:51:20.727423 env[1219]: time="2024-07-02T07:51:20.727357746Z" level=error msg="StartContainer for \"f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 2 07:51:20.727691 kubelet[1539]: E0702 07:51:20.727665 1539 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c" Jul 2 07:51:20.729770 kubelet[1539]: E0702 07:51:20.729735 1539 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 07:51:20.729770 kubelet[1539]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 07:51:20.729770 kubelet[1539]: rm /hostbin/cilium-mount Jul 2 07:51:20.730060 kubelet[1539]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xqkdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-4dzv9_kube-system(0495d4bf-8b9a-4e44-9cfc-66c5a6004068): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 07:51:20.730060 kubelet[1539]: E0702 07:51:20.729831 1539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4dzv9" podUID="0495d4bf-8b9a-4e44-9cfc-66c5a6004068" Jul 2 07:51:21.046590 kubelet[1539]: E0702 07:51:21.046534 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:21.484853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2739675663.mount: Deactivated successfully. Jul 2 07:51:21.492182 env[1219]: time="2024-07-02T07:51:21.492136791Z" level=info msg="StopPodSandbox for \"5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9\"" Jul 2 07:51:21.495293 env[1219]: time="2024-07-02T07:51:21.492216099Z" level=info msg="Container to stop \"f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:51:21.495351 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9-shm.mount: Deactivated successfully. Jul 2 07:51:21.517746 systemd[1]: cri-containerd-5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9.scope: Deactivated successfully. Jul 2 07:51:21.565957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9-rootfs.mount: Deactivated successfully. 
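The failure reason quoted above, "write /proc/self/attr/keycreate: invalid argument", typically points at SELinux keyring labeling during container init: the mount-cgroup spec requests SELinuxOptions type spc_t, and when the OCI runtime's attempt to write that label to /proc/self/attr/keycreate is rejected with EINVAL (commonly because the running policy does not accept that context for the session keyring), the task is never created, so the kubelet records RunContainerError and "Error syncing pod" for cilium-4dzv9. This is the common reading of that error, not something the log itself states. A short sketch (ours) for surfacing these sync errors from the capture, handling the backslash-escaped quotes klog puts inside err="…":

```python
import re
import sys

# Sketch: list the kubelet's "Error syncing pod" entries together with the pod
# and the error text, unescaping the \" sequences klog writes inside err="...".
SYNC_ERR = re.compile(r'"Error syncing pod, skipping" err="(?P<err>(?:[^"\\]|\\.)*)" pod="(?P<pod>[^"]+)"')

for m in SYNC_ERR.finditer(sys.stdin.read()):
    print(m.group("pod"), "->", m.group("err").replace('\\"', '"'))
```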
Jul 2 07:51:21.581957 env[1219]: time="2024-07-02T07:51:21.581831901Z" level=info msg="shim disconnected" id=5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9 Jul 2 07:51:21.581957 env[1219]: time="2024-07-02T07:51:21.581924948Z" level=warning msg="cleaning up after shim disconnected" id=5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9 namespace=k8s.io Jul 2 07:51:21.581957 env[1219]: time="2024-07-02T07:51:21.581944045Z" level=info msg="cleaning up dead shim" Jul 2 07:51:21.604074 env[1219]: time="2024-07-02T07:51:21.604029522Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3223 runtime=io.containerd.runc.v2\n" Jul 2 07:51:21.604639 env[1219]: time="2024-07-02T07:51:21.604599567Z" level=info msg="TearDown network for sandbox \"5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9\" successfully" Jul 2 07:51:21.604788 env[1219]: time="2024-07-02T07:51:21.604761903Z" level=info msg="StopPodSandbox for \"5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9\" returns successfully" Jul 2 07:51:21.720100 kubelet[1539]: I0702 07:51:21.720057 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-config-path\") pod \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " Jul 2 07:51:21.720435 kubelet[1539]: I0702 07:51:21.720408 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-host-proc-sys-kernel\") pod \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " Jul 2 07:51:21.720597 kubelet[1539]: I0702 07:51:21.720582 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-hubble-tls\") pod \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " Jul 2 07:51:21.720747 kubelet[1539]: I0702 07:51:21.720732 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-host-proc-sys-net\") pod \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " Jul 2 07:51:21.720931 kubelet[1539]: I0702 07:51:21.720917 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-etc-cni-netd\") pod \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " Jul 2 07:51:21.721092 kubelet[1539]: I0702 07:51:21.721078 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-clustermesh-secrets\") pod \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " Jul 2 07:51:21.721241 kubelet[1539]: I0702 07:51:21.721228 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cni-path\") pod \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\" (UID: 
\"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " Jul 2 07:51:21.721394 kubelet[1539]: I0702 07:51:21.721380 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-cgroup\") pod \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " Jul 2 07:51:21.721542 kubelet[1539]: I0702 07:51:21.721528 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-ipsec-secrets\") pod \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " Jul 2 07:51:21.721700 kubelet[1539]: I0702 07:51:21.721686 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqkdq\" (UniqueName: \"kubernetes.io/projected/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-kube-api-access-xqkdq\") pod \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " Jul 2 07:51:21.721842 kubelet[1539]: I0702 07:51:21.721829 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-hostproc\") pod \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " Jul 2 07:51:21.721997 kubelet[1539]: I0702 07:51:21.721983 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-xtables-lock\") pod \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " Jul 2 07:51:21.722142 kubelet[1539]: I0702 07:51:21.722128 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-lib-modules\") pod \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " Jul 2 07:51:21.722286 kubelet[1539]: I0702 07:51:21.722272 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-run\") pod \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " Jul 2 07:51:21.722441 kubelet[1539]: I0702 07:51:21.722426 1539 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-bpf-maps\") pod \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\" (UID: \"0495d4bf-8b9a-4e44-9cfc-66c5a6004068\") " Jul 2 07:51:21.722652 kubelet[1539]: I0702 07:51:21.722631 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0495d4bf-8b9a-4e44-9cfc-66c5a6004068" (UID: "0495d4bf-8b9a-4e44-9cfc-66c5a6004068"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:21.725799 kubelet[1539]: I0702 07:51:21.725761 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0495d4bf-8b9a-4e44-9cfc-66c5a6004068" (UID: "0495d4bf-8b9a-4e44-9cfc-66c5a6004068"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:21.726897 kubelet[1539]: I0702 07:51:21.726503 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0495d4bf-8b9a-4e44-9cfc-66c5a6004068" (UID: "0495d4bf-8b9a-4e44-9cfc-66c5a6004068"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:51:21.727094 kubelet[1539]: I0702 07:51:21.727049 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0495d4bf-8b9a-4e44-9cfc-66c5a6004068" (UID: "0495d4bf-8b9a-4e44-9cfc-66c5a6004068"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:21.735033 kubelet[1539]: I0702 07:51:21.735002 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0495d4bf-8b9a-4e44-9cfc-66c5a6004068" (UID: "0495d4bf-8b9a-4e44-9cfc-66c5a6004068"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:51:21.737999 kubelet[1539]: I0702 07:51:21.735555 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0495d4bf-8b9a-4e44-9cfc-66c5a6004068" (UID: "0495d4bf-8b9a-4e44-9cfc-66c5a6004068"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:21.738453 kubelet[1539]: I0702 07:51:21.735580 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0495d4bf-8b9a-4e44-9cfc-66c5a6004068" (UID: "0495d4bf-8b9a-4e44-9cfc-66c5a6004068"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:21.738618 kubelet[1539]: I0702 07:51:21.736256 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cni-path" (OuterVolumeSpecName: "cni-path") pod "0495d4bf-8b9a-4e44-9cfc-66c5a6004068" (UID: "0495d4bf-8b9a-4e44-9cfc-66c5a6004068"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:21.738769 kubelet[1539]: I0702 07:51:21.738191 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-hostproc" (OuterVolumeSpecName: "hostproc") pod "0495d4bf-8b9a-4e44-9cfc-66c5a6004068" (UID: "0495d4bf-8b9a-4e44-9cfc-66c5a6004068"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:21.738948 kubelet[1539]: I0702 07:51:21.738218 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0495d4bf-8b9a-4e44-9cfc-66c5a6004068" (UID: "0495d4bf-8b9a-4e44-9cfc-66c5a6004068"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:21.739073 kubelet[1539]: I0702 07:51:21.738249 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0495d4bf-8b9a-4e44-9cfc-66c5a6004068" (UID: "0495d4bf-8b9a-4e44-9cfc-66c5a6004068"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:21.739229 kubelet[1539]: I0702 07:51:21.738268 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0495d4bf-8b9a-4e44-9cfc-66c5a6004068" (UID: "0495d4bf-8b9a-4e44-9cfc-66c5a6004068"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:51:21.739353 kubelet[1539]: I0702 07:51:21.738385 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0495d4bf-8b9a-4e44-9cfc-66c5a6004068" (UID: "0495d4bf-8b9a-4e44-9cfc-66c5a6004068"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:51:21.743681 kubelet[1539]: I0702 07:51:21.743648 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-kube-api-access-xqkdq" (OuterVolumeSpecName: "kube-api-access-xqkdq") pod "0495d4bf-8b9a-4e44-9cfc-66c5a6004068" (UID: "0495d4bf-8b9a-4e44-9cfc-66c5a6004068"). InnerVolumeSpecName "kube-api-access-xqkdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:51:21.746910 kubelet[1539]: I0702 07:51:21.744000 1539 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0495d4bf-8b9a-4e44-9cfc-66c5a6004068" (UID: "0495d4bf-8b9a-4e44-9cfc-66c5a6004068"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:51:21.823436 kubelet[1539]: I0702 07:51:21.823389 1539 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-config-path\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:21.823691 kubelet[1539]: I0702 07:51:21.823673 1539 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-host-proc-sys-kernel\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:21.823837 kubelet[1539]: I0702 07:51:21.823822 1539 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-hubble-tls\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:21.823996 kubelet[1539]: I0702 07:51:21.823983 1539 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-host-proc-sys-net\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:21.824135 kubelet[1539]: I0702 07:51:21.824122 1539 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-etc-cni-netd\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:21.824264 kubelet[1539]: I0702 07:51:21.824251 1539 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-clustermesh-secrets\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:21.824391 kubelet[1539]: I0702 07:51:21.824377 1539 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cni-path\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:21.824523 kubelet[1539]: I0702 07:51:21.824510 1539 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-cgroup\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:21.824654 kubelet[1539]: I0702 07:51:21.824642 1539 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-ipsec-secrets\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:21.824780 kubelet[1539]: I0702 07:51:21.824767 1539 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xqkdq\" (UniqueName: \"kubernetes.io/projected/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-kube-api-access-xqkdq\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:21.824919 kubelet[1539]: I0702 07:51:21.824907 1539 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-hostproc\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:21.825051 kubelet[1539]: I0702 07:51:21.825038 1539 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-xtables-lock\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:21.825188 kubelet[1539]: I0702 07:51:21.825175 1539 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-lib-modules\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:21.825316 kubelet[1539]: I0702 07:51:21.825304 1539 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-cilium-run\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:21.825452 kubelet[1539]: I0702 07:51:21.825440 1539 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0495d4bf-8b9a-4e44-9cfc-66c5a6004068-bpf-maps\") on node \"10.128.0.56\" DevicePath \"\"" Jul 2 07:51:22.000491 kubelet[1539]: E0702 07:51:22.000362 1539 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:22.047052 kubelet[1539]: E0702 07:51:22.047008 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:22.050954 kubelet[1539]: I0702 07:51:22.050928 1539 scope.go:117] "RemoveContainer" containerID="f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c" Jul 2 07:51:22.052405 env[1219]: time="2024-07-02T07:51:22.052365961Z" level=info msg="RemoveContainer for \"f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c\"" Jul 2 07:51:22.056536 env[1219]: time="2024-07-02T07:51:22.056496566Z" level=info msg="RemoveContainer for \"f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c\" returns successfully" Jul 2 07:51:22.058044 env[1219]: time="2024-07-02T07:51:22.058010992Z" level=info msg="StopPodSandbox for \"5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9\"" Jul 2 07:51:22.058306 env[1219]: time="2024-07-02T07:51:22.058249685Z" level=info msg="TearDown network for sandbox \"5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9\" successfully" Jul 2 07:51:22.058425 env[1219]: time="2024-07-02T07:51:22.058399143Z" level=info msg="StopPodSandbox for \"5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9\" returns successfully" Jul 2 07:51:22.059004 env[1219]: time="2024-07-02T07:51:22.058973649Z" level=info msg="RemovePodSandbox for \"5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9\"" Jul 2 07:51:22.059243 env[1219]: time="2024-07-02T07:51:22.059189745Z" level=info msg="Forcibly stopping sandbox \"5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9\"" Jul 2 07:51:22.059433 env[1219]: time="2024-07-02T07:51:22.059405481Z" level=info msg="TearDown network for sandbox \"5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9\" successfully" Jul 2 07:51:22.063171 env[1219]: time="2024-07-02T07:51:22.063136035Z" level=info msg="RemovePodSandbox \"5a858bda4ae58c0fe26ee3423e07f2ab974f32d9a082a77561b22cc689df1ad9\" returns successfully" Jul 2 07:51:22.063735 env[1219]: time="2024-07-02T07:51:22.063705208Z" level=info msg="StopPodSandbox for \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\"" Jul 2 07:51:22.063989 env[1219]: time="2024-07-02T07:51:22.063938407Z" level=info msg="TearDown network for sandbox \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" successfully" Jul 2 07:51:22.064126 env[1219]: time="2024-07-02T07:51:22.064098929Z" level=info msg="StopPodSandbox for \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" returns successfully" Jul 2 07:51:22.064584 env[1219]: time="2024-07-02T07:51:22.064555409Z" level=info msg="RemovePodSandbox for 
\"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\"" Jul 2 07:51:22.064747 env[1219]: time="2024-07-02T07:51:22.064702837Z" level=info msg="Forcibly stopping sandbox \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\"" Jul 2 07:51:22.064949 env[1219]: time="2024-07-02T07:51:22.064921219Z" level=info msg="TearDown network for sandbox \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" successfully" Jul 2 07:51:22.068817 env[1219]: time="2024-07-02T07:51:22.068781616Z" level=info msg="RemovePodSandbox \"2c0c71894893924e197cb2b5aa6b15ba0f99c416c8b0c720ad941dd601929557\" returns successfully" Jul 2 07:51:22.128163 kubelet[1539]: E0702 07:51:22.128062 1539 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:51:22.197380 systemd[1]: Removed slice kubepods-burstable-pod0495d4bf_8b9a_4e44_9cfc_66c5a6004068.slice. Jul 2 07:51:22.324491 systemd[1]: var-lib-kubelet-pods-0495d4bf\x2d8b9a\x2d4e44\x2d9cfc\x2d66c5a6004068-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxqkdq.mount: Deactivated successfully. Jul 2 07:51:22.324631 systemd[1]: var-lib-kubelet-pods-0495d4bf\x2d8b9a\x2d4e44\x2d9cfc\x2d66c5a6004068-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:51:22.324732 systemd[1]: var-lib-kubelet-pods-0495d4bf\x2d8b9a\x2d4e44\x2d9cfc\x2d66c5a6004068-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:51:22.324825 systemd[1]: var-lib-kubelet-pods-0495d4bf\x2d8b9a\x2d4e44\x2d9cfc\x2d66c5a6004068-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 07:51:22.389942 env[1219]: time="2024-07-02T07:51:22.389853815Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:22.391982 env[1219]: time="2024-07-02T07:51:22.391935965Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:22.394138 env[1219]: time="2024-07-02T07:51:22.394088984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:22.394776 env[1219]: time="2024-07-02T07:51:22.394733216Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 07:51:22.397367 env[1219]: time="2024-07-02T07:51:22.397325647Z" level=info msg="CreateContainer within sandbox \"e1d104a0a7642333736772d394c7a2cda33d285d746c484611b6c9835eafe3f7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 07:51:22.414942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1069232661.mount: Deactivated successfully. 
Jul 2 07:51:22.422944 env[1219]: time="2024-07-02T07:51:22.422891760Z" level=info msg="CreateContainer within sandbox \"e1d104a0a7642333736772d394c7a2cda33d285d746c484611b6c9835eafe3f7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ea343ebdb80de00e1f388d25414da007d4e8afcef588a1e46f0a6d3f2b187eb1\"" Jul 2 07:51:22.423698 env[1219]: time="2024-07-02T07:51:22.423652610Z" level=info msg="StartContainer for \"ea343ebdb80de00e1f388d25414da007d4e8afcef588a1e46f0a6d3f2b187eb1\"" Jul 2 07:51:22.451766 systemd[1]: Started cri-containerd-ea343ebdb80de00e1f388d25414da007d4e8afcef588a1e46f0a6d3f2b187eb1.scope. Jul 2 07:51:22.487438 env[1219]: time="2024-07-02T07:51:22.487296928Z" level=info msg="StartContainer for \"ea343ebdb80de00e1f388d25414da007d4e8afcef588a1e46f0a6d3f2b187eb1\" returns successfully" Jul 2 07:51:22.541955 kubelet[1539]: I0702 07:51:22.541916 1539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-dmhqn" podStartSLOduration=0.763129717 podStartE2EDuration="2.541849906s" podCreationTimestamp="2024-07-02 07:51:20 +0000 UTC" firstStartedPulling="2024-07-02 07:51:20.616396864 +0000 UTC m=+59.723481327" lastFinishedPulling="2024-07-02 07:51:22.39511706 +0000 UTC m=+61.502201516" observedRunningTime="2024-07-02 07:51:22.520205119 +0000 UTC m=+61.627289591" watchObservedRunningTime="2024-07-02 07:51:22.541849906 +0000 UTC m=+61.648934377" Jul 2 07:51:22.557157 kubelet[1539]: I0702 07:51:22.557111 1539 topology_manager.go:215] "Topology Admit Handler" podUID="e6e6667f-306e-47bb-8d44-efcfc40c4bd8" podNamespace="kube-system" podName="cilium-st9v8" Jul 2 07:51:22.557321 kubelet[1539]: E0702 07:51:22.557177 1539 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0495d4bf-8b9a-4e44-9cfc-66c5a6004068" containerName="mount-cgroup" Jul 2 07:51:22.557321 kubelet[1539]: I0702 07:51:22.557214 1539 memory_manager.go:354] "RemoveStaleState removing state" podUID="0495d4bf-8b9a-4e44-9cfc-66c5a6004068" containerName="mount-cgroup" Jul 2 07:51:22.564413 systemd[1]: Created slice kubepods-burstable-pode6e6667f_306e_47bb_8d44_efcfc40c4bd8.slice. 
Jul 2 07:51:22.630524 kubelet[1539]: I0702 07:51:22.630391 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6e6667f-306e-47bb-8d44-efcfc40c4bd8-host-proc-sys-net\") pod \"cilium-st9v8\" (UID: \"e6e6667f-306e-47bb-8d44-efcfc40c4bd8\") " pod="kube-system/cilium-st9v8" Jul 2 07:51:22.630524 kubelet[1539]: I0702 07:51:22.630471 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6e6667f-306e-47bb-8d44-efcfc40c4bd8-host-proc-sys-kernel\") pod \"cilium-st9v8\" (UID: \"e6e6667f-306e-47bb-8d44-efcfc40c4bd8\") " pod="kube-system/cilium-st9v8" Jul 2 07:51:22.630524 kubelet[1539]: I0702 07:51:22.630511 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6e6667f-306e-47bb-8d44-efcfc40c4bd8-cni-path\") pod \"cilium-st9v8\" (UID: \"e6e6667f-306e-47bb-8d44-efcfc40c4bd8\") " pod="kube-system/cilium-st9v8" Jul 2 07:51:22.630844 kubelet[1539]: I0702 07:51:22.630562 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6e6667f-306e-47bb-8d44-efcfc40c4bd8-etc-cni-netd\") pod \"cilium-st9v8\" (UID: \"e6e6667f-306e-47bb-8d44-efcfc40c4bd8\") " pod="kube-system/cilium-st9v8" Jul 2 07:51:22.630844 kubelet[1539]: I0702 07:51:22.630597 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6e6667f-306e-47bb-8d44-efcfc40c4bd8-hostproc\") pod \"cilium-st9v8\" (UID: \"e6e6667f-306e-47bb-8d44-efcfc40c4bd8\") " pod="kube-system/cilium-st9v8" Jul 2 07:51:22.630844 kubelet[1539]: I0702 07:51:22.630726 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6e6667f-306e-47bb-8d44-efcfc40c4bd8-cilium-config-path\") pod \"cilium-st9v8\" (UID: \"e6e6667f-306e-47bb-8d44-efcfc40c4bd8\") " pod="kube-system/cilium-st9v8" Jul 2 07:51:22.631047 kubelet[1539]: I0702 07:51:22.630884 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4jlm\" (UniqueName: \"kubernetes.io/projected/e6e6667f-306e-47bb-8d44-efcfc40c4bd8-kube-api-access-r4jlm\") pod \"cilium-st9v8\" (UID: \"e6e6667f-306e-47bb-8d44-efcfc40c4bd8\") " pod="kube-system/cilium-st9v8" Jul 2 07:51:22.631047 kubelet[1539]: I0702 07:51:22.630930 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6e6667f-306e-47bb-8d44-efcfc40c4bd8-xtables-lock\") pod \"cilium-st9v8\" (UID: \"e6e6667f-306e-47bb-8d44-efcfc40c4bd8\") " pod="kube-system/cilium-st9v8" Jul 2 07:51:22.631181 kubelet[1539]: I0702 07:51:22.631074 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6e6667f-306e-47bb-8d44-efcfc40c4bd8-cilium-run\") pod \"cilium-st9v8\" (UID: \"e6e6667f-306e-47bb-8d44-efcfc40c4bd8\") " pod="kube-system/cilium-st9v8" Jul 2 07:51:22.631181 kubelet[1539]: I0702 07:51:22.631162 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e6e6667f-306e-47bb-8d44-efcfc40c4bd8-lib-modules\") pod \"cilium-st9v8\" (UID: \"e6e6667f-306e-47bb-8d44-efcfc40c4bd8\") " pod="kube-system/cilium-st9v8" Jul 2 07:51:22.631297 kubelet[1539]: I0702 07:51:22.631245 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6e6667f-306e-47bb-8d44-efcfc40c4bd8-hubble-tls\") pod \"cilium-st9v8\" (UID: \"e6e6667f-306e-47bb-8d44-efcfc40c4bd8\") " pod="kube-system/cilium-st9v8" Jul 2 07:51:22.631355 kubelet[1539]: I0702 07:51:22.631332 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6e6667f-306e-47bb-8d44-efcfc40c4bd8-clustermesh-secrets\") pod \"cilium-st9v8\" (UID: \"e6e6667f-306e-47bb-8d44-efcfc40c4bd8\") " pod="kube-system/cilium-st9v8" Jul 2 07:51:22.631433 kubelet[1539]: I0702 07:51:22.631420 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6e6667f-306e-47bb-8d44-efcfc40c4bd8-cilium-cgroup\") pod \"cilium-st9v8\" (UID: \"e6e6667f-306e-47bb-8d44-efcfc40c4bd8\") " pod="kube-system/cilium-st9v8" Jul 2 07:51:22.631533 kubelet[1539]: I0702 07:51:22.631514 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e6e6667f-306e-47bb-8d44-efcfc40c4bd8-cilium-ipsec-secrets\") pod \"cilium-st9v8\" (UID: \"e6e6667f-306e-47bb-8d44-efcfc40c4bd8\") " pod="kube-system/cilium-st9v8" Jul 2 07:51:22.631677 kubelet[1539]: I0702 07:51:22.631658 1539 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6e6667f-306e-47bb-8d44-efcfc40c4bd8-bpf-maps\") pod \"cilium-st9v8\" (UID: \"e6e6667f-306e-47bb-8d44-efcfc40c4bd8\") " pod="kube-system/cilium-st9v8" Jul 2 07:51:22.872474 env[1219]: time="2024-07-02T07:51:22.872416102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-st9v8,Uid:e6e6667f-306e-47bb-8d44-efcfc40c4bd8,Namespace:kube-system,Attempt:0,}" Jul 2 07:51:22.890505 env[1219]: time="2024-07-02T07:51:22.890347415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:51:22.890505 env[1219]: time="2024-07-02T07:51:22.890400185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:51:22.890963 env[1219]: time="2024-07-02T07:51:22.890418578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:51:22.891372 env[1219]: time="2024-07-02T07:51:22.891311286Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/626900459accbfe1e863ee2923e295698654522ee66be8efee9f79893a8ce1f8 pid=3293 runtime=io.containerd.runc.v2 Jul 2 07:51:22.908729 systemd[1]: Started cri-containerd-626900459accbfe1e863ee2923e295698654522ee66be8efee9f79893a8ce1f8.scope. 
Jul 2 07:51:22.939223 env[1219]: time="2024-07-02T07:51:22.939174231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-st9v8,Uid:e6e6667f-306e-47bb-8d44-efcfc40c4bd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"626900459accbfe1e863ee2923e295698654522ee66be8efee9f79893a8ce1f8\"" Jul 2 07:51:22.943505 env[1219]: time="2024-07-02T07:51:22.943465474Z" level=info msg="CreateContainer within sandbox \"626900459accbfe1e863ee2923e295698654522ee66be8efee9f79893a8ce1f8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:51:22.957160 env[1219]: time="2024-07-02T07:51:22.957125350Z" level=info msg="CreateContainer within sandbox \"626900459accbfe1e863ee2923e295698654522ee66be8efee9f79893a8ce1f8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8c293bbcebde270851c6d68415697e56b6f78b0ec429a0d83ad4c0a63d0f9638\"" Jul 2 07:51:22.957895 env[1219]: time="2024-07-02T07:51:22.957851542Z" level=info msg="StartContainer for \"8c293bbcebde270851c6d68415697e56b6f78b0ec429a0d83ad4c0a63d0f9638\"" Jul 2 07:51:22.978267 systemd[1]: Started cri-containerd-8c293bbcebde270851c6d68415697e56b6f78b0ec429a0d83ad4c0a63d0f9638.scope. Jul 2 07:51:23.011651 env[1219]: time="2024-07-02T07:51:23.011597132Z" level=info msg="StartContainer for \"8c293bbcebde270851c6d68415697e56b6f78b0ec429a0d83ad4c0a63d0f9638\" returns successfully" Jul 2 07:51:23.022031 systemd[1]: cri-containerd-8c293bbcebde270851c6d68415697e56b6f78b0ec429a0d83ad4c0a63d0f9638.scope: Deactivated successfully. Jul 2 07:51:23.051362 kubelet[1539]: E0702 07:51:23.051311 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:23.225308 env[1219]: time="2024-07-02T07:51:23.225158246Z" level=info msg="shim disconnected" id=8c293bbcebde270851c6d68415697e56b6f78b0ec429a0d83ad4c0a63d0f9638 Jul 2 07:51:23.225308 env[1219]: time="2024-07-02T07:51:23.225220405Z" level=warning msg="cleaning up after shim disconnected" id=8c293bbcebde270851c6d68415697e56b6f78b0ec429a0d83ad4c0a63d0f9638 namespace=k8s.io Jul 2 07:51:23.225308 env[1219]: time="2024-07-02T07:51:23.225234363Z" level=info msg="cleaning up dead shim" Jul 2 07:51:23.236930 env[1219]: time="2024-07-02T07:51:23.236856709Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3378 runtime=io.containerd.runc.v2\n" Jul 2 07:51:23.515229 env[1219]: time="2024-07-02T07:51:23.514784326Z" level=info msg="CreateContainer within sandbox \"626900459accbfe1e863ee2923e295698654522ee66be8efee9f79893a8ce1f8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:51:23.532997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677711478.mount: Deactivated successfully. Jul 2 07:51:23.546646 env[1219]: time="2024-07-02T07:51:23.546598278Z" level=info msg="CreateContainer within sandbox \"626900459accbfe1e863ee2923e295698654522ee66be8efee9f79893a8ce1f8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e652fb7767d1c0022645c66037e4c9590a448dd35dd1698e041376414b67370a\"" Jul 2 07:51:23.547314 env[1219]: time="2024-07-02T07:51:23.547279263Z" level=info msg="StartContainer for \"e652fb7767d1c0022645c66037e4c9590a448dd35dd1698e041376414b67370a\"" Jul 2 07:51:23.572609 systemd[1]: Started cri-containerd-e652fb7767d1c0022645c66037e4c9590a448dd35dd1698e041376414b67370a.scope. 
Jul 2 07:51:23.611902 env[1219]: time="2024-07-02T07:51:23.611818813Z" level=info msg="StartContainer for \"e652fb7767d1c0022645c66037e4c9590a448dd35dd1698e041376414b67370a\" returns successfully" Jul 2 07:51:23.619344 systemd[1]: cri-containerd-e652fb7767d1c0022645c66037e4c9590a448dd35dd1698e041376414b67370a.scope: Deactivated successfully. Jul 2 07:51:23.644312 env[1219]: time="2024-07-02T07:51:23.644237898Z" level=info msg="shim disconnected" id=e652fb7767d1c0022645c66037e4c9590a448dd35dd1698e041376414b67370a Jul 2 07:51:23.644312 env[1219]: time="2024-07-02T07:51:23.644298244Z" level=warning msg="cleaning up after shim disconnected" id=e652fb7767d1c0022645c66037e4c9590a448dd35dd1698e041376414b67370a namespace=k8s.io Jul 2 07:51:23.644312 env[1219]: time="2024-07-02T07:51:23.644316285Z" level=info msg="cleaning up dead shim" Jul 2 07:51:23.655436 env[1219]: time="2024-07-02T07:51:23.655376897Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3439 runtime=io.containerd.runc.v2\n" Jul 2 07:51:23.828905 kubelet[1539]: W0702 07:51:23.828483 1539 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0495d4bf_8b9a_4e44_9cfc_66c5a6004068.slice/cri-containerd-f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c.scope WatchSource:0}: container "f73ba8a88165ec90c1df725e71ddb015f4cd7650bc22a7897aff436e1398935c" in namespace "k8s.io": not found Jul 2 07:51:23.934583 kubelet[1539]: I0702 07:51:23.934302 1539 setters.go:568] "Node became not ready" node="10.128.0.56" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T07:51:23Z","lastTransitionTime":"2024-07-02T07:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 07:51:24.052320 kubelet[1539]: E0702 07:51:24.052278 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:24.190039 kubelet[1539]: I0702 07:51:24.189905 1539 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0495d4bf-8b9a-4e44-9cfc-66c5a6004068" path="/var/lib/kubelet/pods/0495d4bf-8b9a-4e44-9cfc-66c5a6004068/volumes" Jul 2 07:51:24.324722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e652fb7767d1c0022645c66037e4c9590a448dd35dd1698e041376414b67370a-rootfs.mount: Deactivated successfully. Jul 2 07:51:24.519070 env[1219]: time="2024-07-02T07:51:24.518614154Z" level=info msg="CreateContainer within sandbox \"626900459accbfe1e863ee2923e295698654522ee66be8efee9f79893a8ce1f8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:51:24.543448 env[1219]: time="2024-07-02T07:51:24.543385971Z" level=info msg="CreateContainer within sandbox \"626900459accbfe1e863ee2923e295698654522ee66be8efee9f79893a8ce1f8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"31463255ffc49ab4eee52cb482f645367889adbde70cb9d163856fe54bf5a0d7\"" Jul 2 07:51:24.543951 env[1219]: time="2024-07-02T07:51:24.543913084Z" level=info msg="StartContainer for \"31463255ffc49ab4eee52cb482f645367889adbde70cb9d163856fe54bf5a0d7\"" Jul 2 07:51:24.583044 systemd[1]: Started cri-containerd-31463255ffc49ab4eee52cb482f645367889adbde70cb9d163856fe54bf5a0d7.scope. 
Jul 2 07:51:24.628391 env[1219]: time="2024-07-02T07:51:24.628328199Z" level=info msg="StartContainer for \"31463255ffc49ab4eee52cb482f645367889adbde70cb9d163856fe54bf5a0d7\" returns successfully" Jul 2 07:51:24.630038 systemd[1]: cri-containerd-31463255ffc49ab4eee52cb482f645367889adbde70cb9d163856fe54bf5a0d7.scope: Deactivated successfully. Jul 2 07:51:24.657393 systemd[1]: Started sshd@7-10.128.0.56:22-43.155.144.191:35750.service. Jul 2 07:51:24.666990 env[1219]: time="2024-07-02T07:51:24.666848952Z" level=info msg="shim disconnected" id=31463255ffc49ab4eee52cb482f645367889adbde70cb9d163856fe54bf5a0d7 Jul 2 07:51:24.667426 env[1219]: time="2024-07-02T07:51:24.667355424Z" level=warning msg="cleaning up after shim disconnected" id=31463255ffc49ab4eee52cb482f645367889adbde70cb9d163856fe54bf5a0d7 namespace=k8s.io Jul 2 07:51:24.667601 env[1219]: time="2024-07-02T07:51:24.667576197Z" level=info msg="cleaning up dead shim" Jul 2 07:51:24.679510 env[1219]: time="2024-07-02T07:51:24.679466308Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3499 runtime=io.containerd.runc.v2\n" Jul 2 07:51:25.053179 kubelet[1539]: E0702 07:51:25.053119 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:25.325166 systemd[1]: run-containerd-runc-k8s.io-31463255ffc49ab4eee52cb482f645367889adbde70cb9d163856fe54bf5a0d7-runc.mz6m9V.mount: Deactivated successfully. Jul 2 07:51:25.325350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31463255ffc49ab4eee52cb482f645367889adbde70cb9d163856fe54bf5a0d7-rootfs.mount: Deactivated successfully. Jul 2 07:51:25.524416 env[1219]: time="2024-07-02T07:51:25.524356541Z" level=info msg="CreateContainer within sandbox \"626900459accbfe1e863ee2923e295698654522ee66be8efee9f79893a8ce1f8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:51:25.547126 env[1219]: time="2024-07-02T07:51:25.547062915Z" level=info msg="CreateContainer within sandbox \"626900459accbfe1e863ee2923e295698654522ee66be8efee9f79893a8ce1f8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"53adf1577117f70b2d11de7a000ee0d2b734f6674300e322f6901a2c07c2ac14\"" Jul 2 07:51:25.547773 env[1219]: time="2024-07-02T07:51:25.547727435Z" level=info msg="StartContainer for \"53adf1577117f70b2d11de7a000ee0d2b734f6674300e322f6901a2c07c2ac14\"" Jul 2 07:51:25.578995 systemd[1]: Started cri-containerd-53adf1577117f70b2d11de7a000ee0d2b734f6674300e322f6901a2c07c2ac14.scope. Jul 2 07:51:25.619994 systemd[1]: cri-containerd-53adf1577117f70b2d11de7a000ee0d2b734f6674300e322f6901a2c07c2ac14.scope: Deactivated successfully. 
Jul 2 07:51:25.623168 env[1219]: time="2024-07-02T07:51:25.623054502Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6e6667f_306e_47bb_8d44_efcfc40c4bd8.slice/cri-containerd-53adf1577117f70b2d11de7a000ee0d2b734f6674300e322f6901a2c07c2ac14.scope/memory.events\": no such file or directory" Jul 2 07:51:25.624932 env[1219]: time="2024-07-02T07:51:25.624845891Z" level=info msg="StartContainer for \"53adf1577117f70b2d11de7a000ee0d2b734f6674300e322f6901a2c07c2ac14\" returns successfully" Jul 2 07:51:25.650002 env[1219]: time="2024-07-02T07:51:25.649945157Z" level=info msg="shim disconnected" id=53adf1577117f70b2d11de7a000ee0d2b734f6674300e322f6901a2c07c2ac14 Jul 2 07:51:25.650282 env[1219]: time="2024-07-02T07:51:25.650015348Z" level=warning msg="cleaning up after shim disconnected" id=53adf1577117f70b2d11de7a000ee0d2b734f6674300e322f6901a2c07c2ac14 namespace=k8s.io Jul 2 07:51:25.650282 env[1219]: time="2024-07-02T07:51:25.650031344Z" level=info msg="cleaning up dead shim" Jul 2 07:51:25.660436 env[1219]: time="2024-07-02T07:51:25.660372460Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3557 runtime=io.containerd.runc.v2\n" Jul 2 07:51:25.681034 sshd[3498]: Invalid user deploy from 43.155.144.191 port 35750 Jul 2 07:51:25.688359 sshd[3498]: Failed password for invalid user deploy from 43.155.144.191 port 35750 ssh2 Jul 2 07:51:25.881987 sshd[3498]: Received disconnect from 43.155.144.191 port 35750:11: Bye Bye [preauth] Jul 2 07:51:25.881987 sshd[3498]: Disconnected from invalid user deploy 43.155.144.191 port 35750 [preauth] Jul 2 07:51:25.883407 systemd[1]: sshd@7-10.128.0.56:22-43.155.144.191:35750.service: Deactivated successfully. Jul 2 07:51:26.053672 kubelet[1539]: E0702 07:51:26.053613 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:26.325052 systemd[1]: run-containerd-runc-k8s.io-53adf1577117f70b2d11de7a000ee0d2b734f6674300e322f6901a2c07c2ac14-runc.Tk4EvM.mount: Deactivated successfully. Jul 2 07:51:26.325213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53adf1577117f70b2d11de7a000ee0d2b734f6674300e322f6901a2c07c2ac14-rootfs.mount: Deactivated successfully. Jul 2 07:51:26.530502 env[1219]: time="2024-07-02T07:51:26.530442268Z" level=info msg="CreateContainer within sandbox \"626900459accbfe1e863ee2923e295698654522ee66be8efee9f79893a8ce1f8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:51:26.552769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2852672181.mount: Deactivated successfully. Jul 2 07:51:26.561183 env[1219]: time="2024-07-02T07:51:26.561103810Z" level=info msg="CreateContainer within sandbox \"626900459accbfe1e863ee2923e295698654522ee66be8efee9f79893a8ce1f8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b04cb657b2cba2d6577cb06fc4a439a330b0730db90341d98cb46f8bd03ce017\"" Jul 2 07:51:26.563192 env[1219]: time="2024-07-02T07:51:26.563143026Z" level=info msg="StartContainer for \"b04cb657b2cba2d6577cb06fc4a439a330b0730db90341d98cb46f8bd03ce017\"" Jul 2 07:51:26.599163 systemd[1]: Started cri-containerd-b04cb657b2cba2d6577cb06fc4a439a330b0730db90341d98cb46f8bd03ce017.scope. 
Jul 2 07:51:26.644067 env[1219]: time="2024-07-02T07:51:26.644007400Z" level=info msg="StartContainer for \"b04cb657b2cba2d6577cb06fc4a439a330b0730db90341d98cb46f8bd03ce017\" returns successfully" Jul 2 07:51:26.957324 kubelet[1539]: W0702 07:51:26.954801 1539 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6e6667f_306e_47bb_8d44_efcfc40c4bd8.slice/cri-containerd-8c293bbcebde270851c6d68415697e56b6f78b0ec429a0d83ad4c0a63d0f9638.scope WatchSource:0}: task 8c293bbcebde270851c6d68415697e56b6f78b0ec429a0d83ad4c0a63d0f9638 not found: not found Jul 2 07:51:27.054251 kubelet[1539]: E0702 07:51:27.054186 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:27.062908 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 07:51:27.548959 kubelet[1539]: I0702 07:51:27.548921 1539 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-st9v8" podStartSLOduration=5.548855626 podStartE2EDuration="5.548855626s" podCreationTimestamp="2024-07-02 07:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:51:27.548592178 +0000 UTC m=+66.655676673" watchObservedRunningTime="2024-07-02 07:51:27.548855626 +0000 UTC m=+66.655940093" Jul 2 07:51:27.737472 systemd[1]: run-containerd-runc-k8s.io-b04cb657b2cba2d6577cb06fc4a439a330b0730db90341d98cb46f8bd03ce017-runc.C87Kz8.mount: Deactivated successfully. Jul 2 07:51:28.054946 kubelet[1539]: E0702 07:51:28.054889 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:29.055372 kubelet[1539]: E0702 07:51:29.055264 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:29.887204 systemd-networkd[1028]: lxc_health: Link UP Jul 2 07:51:29.929274 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:51:29.930184 systemd-networkd[1028]: lxc_health: Gained carrier Jul 2 07:51:29.976917 systemd[1]: run-containerd-runc-k8s.io-b04cb657b2cba2d6577cb06fc4a439a330b0730db90341d98cb46f8bd03ce017-runc.ORFZUp.mount: Deactivated successfully. 
Jul 2 07:51:30.057016 kubelet[1539]: E0702 07:51:30.056966 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:30.074135 kubelet[1539]: W0702 07:51:30.073288 1539 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6e6667f_306e_47bb_8d44_efcfc40c4bd8.slice/cri-containerd-e652fb7767d1c0022645c66037e4c9590a448dd35dd1698e041376414b67370a.scope WatchSource:0}: task e652fb7767d1c0022645c66037e4c9590a448dd35dd1698e041376414b67370a not found: not found Jul 2 07:51:31.058279 kubelet[1539]: E0702 07:51:31.058188 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:31.990200 systemd-networkd[1028]: lxc_health: Gained IPv6LL Jul 2 07:51:32.058465 kubelet[1539]: E0702 07:51:32.058391 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:32.265456 systemd[1]: run-containerd-runc-k8s.io-b04cb657b2cba2d6577cb06fc4a439a330b0730db90341d98cb46f8bd03ce017-runc.t4rzbn.mount: Deactivated successfully. Jul 2 07:51:33.059224 kubelet[1539]: E0702 07:51:33.059174 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:33.182973 kubelet[1539]: W0702 07:51:33.182920 1539 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6e6667f_306e_47bb_8d44_efcfc40c4bd8.slice/cri-containerd-31463255ffc49ab4eee52cb482f645367889adbde70cb9d163856fe54bf5a0d7.scope WatchSource:0}: task 31463255ffc49ab4eee52cb482f645367889adbde70cb9d163856fe54bf5a0d7 not found: not found Jul 2 07:51:34.060675 kubelet[1539]: E0702 07:51:34.060626 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:34.517598 systemd[1]: run-containerd-runc-k8s.io-b04cb657b2cba2d6577cb06fc4a439a330b0730db90341d98cb46f8bd03ce017-runc.oold4e.mount: Deactivated successfully. Jul 2 07:51:35.061984 kubelet[1539]: E0702 07:51:35.061934 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:36.062264 kubelet[1539]: E0702 07:51:36.062198 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:36.292436 kubelet[1539]: W0702 07:51:36.292374 1539 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6e6667f_306e_47bb_8d44_efcfc40c4bd8.slice/cri-containerd-53adf1577117f70b2d11de7a000ee0d2b734f6674300e322f6901a2c07c2ac14.scope WatchSource:0}: task 53adf1577117f70b2d11de7a000ee0d2b734f6674300e322f6901a2c07c2ac14 not found: not found Jul 2 07:51:36.748147 systemd[1]: run-containerd-runc-k8s.io-b04cb657b2cba2d6577cb06fc4a439a330b0730db90341d98cb46f8bd03ce017-runc.HBXIsA.mount: Deactivated successfully. 
Jul 2 07:51:37.062434 kubelet[1539]: E0702 07:51:37.062380 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:38.062740 kubelet[1539]: E0702 07:51:38.062677 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:39.063520 kubelet[1539]: E0702 07:51:39.063455 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:51:40.064081 kubelet[1539]: E0702 07:51:40.064022 1539 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"