Jul 2 07:47:21.081122 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024 Jul 2 07:47:21.081160 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:47:21.081177 kernel: BIOS-provided physical RAM map: Jul 2 07:47:21.081191 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jul 2 07:47:21.081204 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jul 2 07:47:21.081217 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jul 2 07:47:21.081238 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jul 2 07:47:21.081252 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jul 2 07:47:21.081265 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jul 2 07:47:21.081279 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jul 2 07:47:21.081293 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jul 2 07:47:21.081307 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jul 2 07:47:21.081320 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jul 2 07:47:21.081335 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jul 2 07:47:21.081356 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jul 2 07:47:21.081371 kernel: NX (Execute Disable) protection: active Jul 2 07:47:21.081386 kernel: efi: EFI v2.70 by EDK II Jul 2 07:47:21.081401 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd2d2018 Jul 2 07:47:21.081416 kernel: random: crng init done Jul 2 07:47:21.081441 kernel: SMBIOS 2.4 present. 
Jul 2 07:47:21.081455 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024 Jul 2 07:47:21.081470 kernel: Hypervisor detected: KVM Jul 2 07:47:21.081488 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 07:47:21.081503 kernel: kvm-clock: cpu 0, msr 1f6192001, primary cpu clock Jul 2 07:47:21.088091 kernel: kvm-clock: using sched offset of 12863396405 cycles Jul 2 07:47:21.088109 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 07:47:21.088124 kernel: tsc: Detected 2299.998 MHz processor Jul 2 07:47:21.088138 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 07:47:21.088153 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 07:47:21.088167 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jul 2 07:47:21.088181 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 07:47:21.088195 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jul 2 07:47:21.088215 kernel: Using GB pages for direct mapping Jul 2 07:47:21.088228 kernel: Secure boot disabled Jul 2 07:47:21.088242 kernel: ACPI: Early table checksum verification disabled Jul 2 07:47:21.088256 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jul 2 07:47:21.088270 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jul 2 07:47:21.088284 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jul 2 07:47:21.088298 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jul 2 07:47:21.088312 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jul 2 07:47:21.088336 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217) Jul 2 07:47:21.088350 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jul 2 07:47:21.088366 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jul 2 07:47:21.088380 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jul 2 07:47:21.088395 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jul 2 07:47:21.088410 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jul 2 07:47:21.088437 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jul 2 07:47:21.088452 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jul 2 07:47:21.088466 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jul 2 07:47:21.088481 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jul 2 07:47:21.088496 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jul 2 07:47:21.088531 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jul 2 07:47:21.088546 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jul 2 07:47:21.088561 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jul 2 07:47:21.088576 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jul 2 07:47:21.088594 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 2 07:47:21.088609 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 2 07:47:21.088624 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 2 07:47:21.088639 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jul 2 07:47:21.088654 kernel: ACPI: SRAT: Node 0 PXM 0 
[mem 0x100000000-0x21fffffff] Jul 2 07:47:21.088669 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jul 2 07:47:21.088684 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jul 2 07:47:21.088699 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jul 2 07:47:21.088714 kernel: Zone ranges: Jul 2 07:47:21.088733 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 07:47:21.088747 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 2 07:47:21.088762 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jul 2 07:47:21.088777 kernel: Movable zone start for each node Jul 2 07:47:21.088792 kernel: Early memory node ranges Jul 2 07:47:21.088807 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jul 2 07:47:21.088822 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jul 2 07:47:21.088837 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jul 2 07:47:21.088852 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jul 2 07:47:21.088870 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jul 2 07:47:21.088884 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jul 2 07:47:21.088899 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:47:21.088914 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jul 2 07:47:21.088929 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jul 2 07:47:21.088944 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 2 07:47:21.088959 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jul 2 07:47:21.088973 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 2 07:47:21.088988 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 07:47:21.089007 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 07:47:21.089024 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 07:47:21.089040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 07:47:21.089057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 07:47:21.089073 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 07:47:21.089089 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 07:47:21.089106 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 2 07:47:21.089122 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jul 2 07:47:21.089138 kernel: Booting paravirtualized kernel on KVM Jul 2 07:47:21.089158 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 07:47:21.089174 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Jul 2 07:47:21.089190 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Jul 2 07:47:21.089206 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Jul 2 07:47:21.089222 kernel: pcpu-alloc: [0] 0 1 Jul 2 07:47:21.089238 kernel: kvm-guest: PV spinlocks enabled Jul 2 07:47:21.089254 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 07:47:21.089271 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1932280 Jul 2 07:47:21.089288 kernel: Policy zone: Normal Jul 2 07:47:21.089309 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:47:21.089327 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 07:47:21.089342 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 2 07:47:21.089359 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 07:47:21.089375 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 07:47:21.089392 kernel: Memory: 7516816K/7860584K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 343512K reserved, 0K cma-reserved) Jul 2 07:47:21.089409 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 07:47:21.089432 kernel: Kernel/User page tables isolation: enabled Jul 2 07:47:21.089452 kernel: ftrace: allocating 34514 entries in 135 pages Jul 2 07:47:21.089468 kernel: ftrace: allocated 135 pages with 4 groups Jul 2 07:47:21.089484 kernel: rcu: Hierarchical RCU implementation. Jul 2 07:47:21.089500 kernel: rcu: RCU event tracing is enabled. Jul 2 07:47:21.089545 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 07:47:21.089563 kernel: Rude variant of Tasks RCU enabled. Jul 2 07:47:21.089579 kernel: Tracing variant of Tasks RCU enabled. Jul 2 07:47:21.089595 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 07:47:21.089612 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 07:47:21.089634 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 2 07:47:21.089664 kernel: Console: colour dummy device 80x25 Jul 2 07:47:21.089681 kernel: printk: console [ttyS0] enabled Jul 2 07:47:21.089702 kernel: ACPI: Core revision 20210730 Jul 2 07:47:21.089718 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 07:47:21.089735 kernel: x2apic enabled Jul 2 07:47:21.089753 kernel: Switched APIC routing to physical x2apic. Jul 2 07:47:21.089770 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jul 2 07:47:21.089788 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 2 07:47:21.089806 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jul 2 07:47:21.089827 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jul 2 07:47:21.089845 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jul 2 07:47:21.089863 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 07:47:21.089880 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 2 07:47:21.089897 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 2 07:47:21.089914 kernel: Spectre V2 : Mitigation: IBRS Jul 2 07:47:21.089932 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 07:47:21.089953 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 07:47:21.089970 kernel: RETBleed: Mitigation: IBRS Jul 2 07:47:21.089987 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 07:47:21.090005 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Jul 2 07:47:21.090023 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 2 07:47:21.090040 kernel: MDS: Mitigation: Clear CPU buffers Jul 2 07:47:21.090058 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 07:47:21.090075 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 07:47:21.090097 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 07:47:21.090114 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 07:47:21.090131 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 07:47:21.090149 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 2 07:47:21.090166 kernel: Freeing SMP alternatives memory: 32K Jul 2 07:47:21.090183 kernel: pid_max: default: 32768 minimum: 301 Jul 2 07:47:21.090200 kernel: LSM: Security Framework initializing Jul 2 07:47:21.090217 kernel: SELinux: Initializing. Jul 2 07:47:21.090234 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 07:47:21.090255 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 07:47:21.090273 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jul 2 07:47:21.090290 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jul 2 07:47:21.090307 kernel: signal: max sigframe size: 1776 Jul 2 07:47:21.090325 kernel: rcu: Hierarchical SRCU implementation. Jul 2 07:47:21.090342 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 2 07:47:21.090360 kernel: smp: Bringing up secondary CPUs ... Jul 2 07:47:21.090377 kernel: x86: Booting SMP configuration: Jul 2 07:47:21.090394 kernel: .... node #0, CPUs: #1 Jul 2 07:47:21.090415 kernel: kvm-clock: cpu 1, msr 1f6192041, secondary cpu clock Jul 2 07:47:21.090440 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 2 07:47:21.090459 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jul 2 07:47:21.090476 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 07:47:21.090493 kernel: smpboot: Max logical packages: 1 Jul 2 07:47:21.090527 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jul 2 07:47:21.090543 kernel: devtmpfs: initialized Jul 2 07:47:21.090558 kernel: x86/mm: Memory block size: 128MB Jul 2 07:47:21.090573 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jul 2 07:47:21.090593 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 07:47:21.090610 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 07:47:21.090628 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 07:47:21.090645 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 07:47:21.090663 kernel: audit: initializing netlink subsys (disabled) Jul 2 07:47:21.090680 kernel: audit: type=2000 audit(1719906439.932:1): state=initialized audit_enabled=0 res=1 Jul 2 07:47:21.090696 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 07:47:21.090714 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 07:47:21.090732 kernel: cpuidle: using governor menu Jul 2 07:47:21.090753 kernel: ACPI: bus type PCI registered Jul 2 07:47:21.090771 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 07:47:21.090787 kernel: dca service started, version 1.12.1 Jul 2 07:47:21.090804 kernel: PCI: Using configuration type 1 for base access Jul 2 07:47:21.090822 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 2 07:47:21.090840 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 07:47:21.090858 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 07:47:21.090875 kernel: ACPI: Added _OSI(Module Device) Jul 2 07:47:21.090889 kernel: ACPI: Added _OSI(Processor Device) Jul 2 07:47:21.090908 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 07:47:21.090922 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 07:47:21.090937 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 2 07:47:21.090951 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 2 07:47:21.090967 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 2 07:47:21.090981 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 2 07:47:21.090996 kernel: ACPI: Interpreter enabled Jul 2 07:47:21.091012 kernel: ACPI: PM: (supports S0 S3 S5) Jul 2 07:47:21.091028 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 07:47:21.091048 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 07:47:21.091064 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jul 2 07:47:21.091080 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 07:47:21.091309 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 2 07:47:21.091505 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Jul 2 07:47:21.091551 kernel: PCI host bridge to bus 0000:00 Jul 2 07:47:21.091711 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 07:47:21.091868 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 07:47:21.092015 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 07:47:21.092161 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jul 2 07:47:21.092307 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 07:47:21.092504 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 07:47:21.092696 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jul 2 07:47:21.092877 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 07:47:21.093045 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 2 07:47:21.093223 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jul 2 07:47:21.093392 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jul 2 07:47:21.093587 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jul 2 07:47:21.093770 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 2 07:47:21.093931 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jul 2 07:47:21.094090 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jul 2 07:47:21.094256 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 07:47:21.094419 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jul 2 07:47:21.094602 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jul 2 07:47:21.094623 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 07:47:21.094639 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 07:47:21.094656 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 07:47:21.094676 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 07:47:21.094692 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 07:47:21.094708 kernel: iommu: Default domain type: Translated Jul 2 07:47:21.094724 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 07:47:21.094739 kernel: vgaarb: loaded Jul 2 07:47:21.094755 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 07:47:21.094772 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 07:47:21.094788 kernel: PTP clock support registered Jul 2 07:47:21.094804 kernel: Registered efivars operations Jul 2 07:47:21.094823 kernel: PCI: Using ACPI for IRQ routing Jul 2 07:47:21.094838 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 07:47:21.094854 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jul 2 07:47:21.094870 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jul 2 07:47:21.094885 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jul 2 07:47:21.094901 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jul 2 07:47:21.094916 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 07:47:21.094932 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 07:47:21.094948 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 07:47:21.094967 kernel: pnp: PnP ACPI init Jul 2 07:47:21.094983 kernel: pnp: PnP ACPI: found 7 devices Jul 2 07:47:21.094999 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 07:47:21.095015 kernel: NET: Registered PF_INET protocol family Jul 2 07:47:21.095030 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 07:47:21.095046 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jul 2 07:47:21.095062 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 07:47:21.095078 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 07:47:21.095094 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Jul 2 07:47:21.095113 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jul 2 07:47:21.095129 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 2 07:47:21.095145 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 2 07:47:21.095161 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 07:47:21.095176 kernel: NET: Registered PF_XDP protocol family Jul 2 07:47:21.095317 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 07:47:21.095462 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 07:47:21.095612 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 07:47:21.095752 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jul 2 07:47:21.095909 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 07:47:21.095930 kernel: PCI: CLS 0 bytes, default 64 Jul 2 07:47:21.095946 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 2 07:47:21.095962 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB) Jul 2 07:47:21.095979 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 07:47:21.095995 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 2 07:47:21.096011 kernel: clocksource: Switched to clocksource tsc Jul 2 07:47:21.096031 kernel: Initialise system trusted keyrings Jul 2 07:47:21.096046 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jul 2 07:47:21.096062 kernel: Key type asymmetric registered Jul 2 07:47:21.096078 kernel: Asymmetric key parser 'x509' registered Jul 2 07:47:21.096093 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 07:47:21.096109 kernel: io scheduler mq-deadline registered Jul 2 
07:47:21.096125 kernel: io scheduler kyber registered Jul 2 07:47:21.096141 kernel: io scheduler bfq registered Jul 2 07:47:21.096157 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 07:47:21.096177 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 07:47:21.096336 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jul 2 07:47:21.096356 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 2 07:47:21.103555 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jul 2 07:47:21.103591 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 07:47:21.103774 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jul 2 07:47:21.103796 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 07:47:21.103813 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 07:47:21.103829 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 2 07:47:21.103852 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jul 2 07:47:21.103868 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jul 2 07:47:21.104030 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jul 2 07:47:21.104053 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 07:47:21.104070 kernel: i8042: Warning: Keylock active Jul 2 07:47:21.104085 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 07:47:21.104102 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 07:47:21.104253 kernel: rtc_cmos 00:00: RTC can wake from S4 Jul 2 07:47:21.104402 kernel: rtc_cmos 00:00: registered as rtc0 Jul 2 07:47:21.104565 kernel: rtc_cmos 00:00: setting system clock to 2024-07-02T07:47:20 UTC (1719906440) Jul 2 07:47:21.104706 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jul 2 07:47:21.104726 kernel: intel_pstate: CPU model not supported Jul 2 07:47:21.104742 kernel: pstore: Registered efi as persistent store backend Jul 2 07:47:21.104759 kernel: NET: Registered PF_INET6 protocol family Jul 2 07:47:21.104774 kernel: Segment Routing with IPv6 Jul 2 07:47:21.104790 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 07:47:21.104811 kernel: NET: Registered PF_PACKET protocol family Jul 2 07:47:21.104826 kernel: Key type dns_resolver registered Jul 2 07:47:21.104842 kernel: IPI shorthand broadcast: enabled Jul 2 07:47:21.104858 kernel: sched_clock: Marking stable (722413821, 127606261)->(876679938, -26659856) Jul 2 07:47:21.104874 kernel: registered taskstats version 1 Jul 2 07:47:21.104890 kernel: Loading compiled-in X.509 certificates Jul 2 07:47:21.104906 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 07:47:21.104923 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 07:47:21.104938 kernel: Key type .fscrypt registered Jul 2 07:47:21.104957 kernel: Key type fscrypt-provisioning registered Jul 2 07:47:21.104973 kernel: pstore: Using crash dump compression: deflate Jul 2 07:47:21.104989 kernel: ima: Allocated hash algorithm: sha1 Jul 2 07:47:21.105005 kernel: ima: No architecture policies found Jul 2 07:47:21.105020 kernel: clk: Disabling unused clocks Jul 2 07:47:21.105036 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 07:47:21.105053 kernel: Write protecting the kernel read-only data: 28672k Jul 2 07:47:21.105069 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 
07:47:21.105088 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 07:47:21.105104 kernel: Run /init as init process Jul 2 07:47:21.105120 kernel: with arguments: Jul 2 07:47:21.105136 kernel: /init Jul 2 07:47:21.105151 kernel: with environment: Jul 2 07:47:21.105166 kernel: HOME=/ Jul 2 07:47:21.105182 kernel: TERM=linux Jul 2 07:47:21.105198 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 07:47:21.105216 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:47:21.105239 systemd[1]: Detected virtualization kvm. Jul 2 07:47:21.105256 systemd[1]: Detected architecture x86-64. Jul 2 07:47:21.105273 systemd[1]: Running in initrd. Jul 2 07:47:21.105289 systemd[1]: No hostname configured, using default hostname. Jul 2 07:47:21.105306 systemd[1]: Hostname set to . Jul 2 07:47:21.105323 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:47:21.105339 systemd[1]: Queued start job for default target initrd.target. Jul 2 07:47:21.105359 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:47:21.105375 systemd[1]: Reached target cryptsetup.target. Jul 2 07:47:21.105392 systemd[1]: Reached target paths.target. Jul 2 07:47:21.105408 systemd[1]: Reached target slices.target. Jul 2 07:47:21.105430 systemd[1]: Reached target swap.target. Jul 2 07:47:21.105447 systemd[1]: Reached target timers.target. Jul 2 07:47:21.105464 systemd[1]: Listening on iscsid.socket. Jul 2 07:47:21.105481 systemd[1]: Listening on iscsiuio.socket. Jul 2 07:47:21.105501 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 07:47:21.105529 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 07:47:21.105546 systemd[1]: Listening on systemd-journald.socket. Jul 2 07:47:21.105562 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:47:21.105579 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:47:21.105596 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:47:21.105613 systemd[1]: Reached target sockets.target. Jul 2 07:47:21.105630 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:47:21.105647 systemd[1]: Finished network-cleanup.service. Jul 2 07:47:21.105667 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 07:47:21.105684 systemd[1]: Starting systemd-journald.service... Jul 2 07:47:21.105719 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:47:21.105739 systemd[1]: Starting systemd-resolved.service... Jul 2 07:47:21.105757 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 07:47:21.105774 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:47:21.105795 kernel: audit: type=1130 audit(1719906441.092:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.105812 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 07:47:21.105830 kernel: audit: type=1130 audit(1719906441.101:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:47:21.105851 systemd-journald[189]: Journal started Jul 2 07:47:21.105933 systemd-journald[189]: Runtime Journal (/run/log/journal/8a4be29a4e4a4e8efdb4cf008228968a) is 8.0M, max 148.8M, 140.8M free. Jul 2 07:47:21.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.107170 systemd[1]: Started systemd-journald.service. Jul 2 07:47:21.110130 systemd-modules-load[190]: Inserted module 'overlay' Jul 2 07:47:21.127173 kernel: audit: type=1130 audit(1719906441.109:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.127214 kernel: audit: type=1130 audit(1719906441.116:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.111087 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 07:47:21.119327 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 07:47:21.126895 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:47:21.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.153145 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:47:21.156528 kernel: audit: type=1130 audit(1719906441.151:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.166114 systemd-resolved[191]: Positive Trust Anchors: Jul 2 07:47:21.167729 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:47:21.168145 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:47:21.177599 systemd-resolved[191]: Defaulting to hostname 'linux'. Jul 2 07:47:21.203646 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jul 2 07:47:21.203696 kernel: audit: type=1130 audit(1719906441.182:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.203723 kernel: Bridge firewalling registered Jul 2 07:47:21.203747 kernel: audit: type=1130 audit(1719906441.188:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.178895 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 07:47:21.183788 systemd[1]: Started systemd-resolved.service. Jul 2 07:47:21.189772 systemd[1]: Reached target nss-lookup.target. Jul 2 07:47:21.192910 systemd-modules-load[190]: Inserted module 'br_netfilter' Jul 2 07:47:21.197922 systemd[1]: Starting dracut-cmdline.service... Jul 2 07:47:21.224789 dracut-cmdline[206]: dracut-dracut-053 Jul 2 07:47:21.224789 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:47:21.238628 kernel: SCSI subsystem initialized Jul 2 07:47:21.253610 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 07:47:21.253697 kernel: device-mapper: uevent: version 1.0.3 Jul 2 07:47:21.253721 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 07:47:21.258614 systemd-modules-load[190]: Inserted module 'dm_multipath' Jul 2 07:47:21.260104 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:47:21.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.272807 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:47:21.282639 kernel: audit: type=1130 audit(1719906441.270:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.284422 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:47:21.295639 kernel: audit: type=1130 audit(1719906441.287:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.314545 kernel: Loading iSCSI transport class v2.0-870. 
Jul 2 07:47:21.334555 kernel: iscsi: registered transport (tcp) Jul 2 07:47:21.361037 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:47:21.361123 kernel: QLogic iSCSI HBA Driver Jul 2 07:47:21.406997 systemd[1]: Finished dracut-cmdline.service. Jul 2 07:47:21.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.412232 systemd[1]: Starting dracut-pre-udev.service... Jul 2 07:47:21.469589 kernel: raid6: avx2x4 gen() 18162 MB/s Jul 2 07:47:21.487555 kernel: raid6: avx2x4 xor() 6855 MB/s Jul 2 07:47:21.504556 kernel: raid6: avx2x2 gen() 18224 MB/s Jul 2 07:47:21.521553 kernel: raid6: avx2x2 xor() 18612 MB/s Jul 2 07:47:21.538553 kernel: raid6: avx2x1 gen() 14266 MB/s Jul 2 07:47:21.555551 kernel: raid6: avx2x1 xor() 16153 MB/s Jul 2 07:47:21.572541 kernel: raid6: sse2x4 gen() 11091 MB/s Jul 2 07:47:21.589552 kernel: raid6: sse2x4 xor() 6758 MB/s Jul 2 07:47:21.606584 kernel: raid6: sse2x2 gen() 12046 MB/s Jul 2 07:47:21.624581 kernel: raid6: sse2x2 xor() 7311 MB/s Jul 2 07:47:21.641586 kernel: raid6: sse2x1 gen() 10395 MB/s Jul 2 07:47:21.659040 kernel: raid6: sse2x1 xor() 5189 MB/s Jul 2 07:47:21.659083 kernel: raid6: using algorithm avx2x2 gen() 18224 MB/s Jul 2 07:47:21.659105 kernel: raid6: .... xor() 18612 MB/s, rmw enabled Jul 2 07:47:21.659741 kernel: raid6: using avx2x2 recovery algorithm Jul 2 07:47:21.674549 kernel: xor: automatically using best checksumming function avx Jul 2 07:47:21.780548 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:47:21.792432 systemd[1]: Finished dracut-pre-udev.service. Jul 2 07:47:21.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.792000 audit: BPF prog-id=7 op=LOAD Jul 2 07:47:21.792000 audit: BPF prog-id=8 op=LOAD Jul 2 07:47:21.794801 systemd[1]: Starting systemd-udevd.service... Jul 2 07:47:21.811775 systemd-udevd[388]: Using default interface naming scheme 'v252'. Jul 2 07:47:21.818818 systemd[1]: Started systemd-udevd.service. Jul 2 07:47:21.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.823891 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 07:47:21.844741 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Jul 2 07:47:21.882692 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 07:47:21.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:21.884963 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:47:21.949467 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:47:21.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:22.041539 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:47:22.093558 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 2 07:47:22.093635 kernel: AES CTR mode by8 optimization enabled Jul 2 07:47:22.099879 kernel: scsi host0: Virtio SCSI HBA Jul 2 07:47:22.114551 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jul 2 07:47:22.187792 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jul 2 07:47:22.188024 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jul 2 07:47:22.189032 kernel: sd 0:0:1:0: [sda] Write Protect is off Jul 2 07:47:22.189274 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jul 2 07:47:22.189477 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 2 07:47:22.199552 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 07:47:22.199623 kernel: GPT:17805311 != 25165823 Jul 2 07:47:22.199657 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 07:47:22.199678 kernel: GPT:17805311 != 25165823 Jul 2 07:47:22.199697 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 07:47:22.199717 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:47:22.202541 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jul 2 07:47:22.246781 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 07:47:22.263788 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (442) Jul 2 07:47:22.285134 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 07:47:22.285366 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 07:47:22.320614 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 07:47:22.340609 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:47:22.354772 systemd[1]: Starting disk-uuid.service... Jul 2 07:47:22.375805 disk-uuid[517]: Primary Header is updated. Jul 2 07:47:22.375805 disk-uuid[517]: Secondary Entries is updated. Jul 2 07:47:22.375805 disk-uuid[517]: Secondary Header is updated. Jul 2 07:47:22.401625 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:47:22.409547 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:47:22.432554 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:47:23.426541 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:47:23.426920 disk-uuid[518]: The operation has completed successfully. Jul 2 07:47:23.493424 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:47:23.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:23.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:23.493573 systemd[1]: Finished disk-uuid.service. Jul 2 07:47:23.504633 systemd[1]: Starting verity-setup.service... Jul 2 07:47:23.539651 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 07:47:23.608233 systemd[1]: Found device dev-mapper-usr.device. Jul 2 07:47:23.610718 systemd[1]: Mounting sysusr-usr.mount... Jul 2 07:47:23.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:23.629990 systemd[1]: Finished verity-setup.service. 
Jul 2 07:47:23.711564 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:47:23.711602 systemd[1]: Mounted sysusr-usr.mount. Jul 2 07:47:23.711970 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 07:47:23.759415 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:47:23.759568 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:47:23.759585 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:47:23.759620 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 07:47:23.712913 systemd[1]: Starting ignition-setup.service... Jul 2 07:47:23.767915 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 07:47:23.791081 systemd[1]: Finished ignition-setup.service. Jul 2 07:47:23.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:23.800988 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 07:47:23.879000 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 07:47:23.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:23.887000 audit: BPF prog-id=9 op=LOAD Jul 2 07:47:23.889934 systemd[1]: Starting systemd-networkd.service... Jul 2 07:47:23.924222 systemd-networkd[692]: lo: Link UP Jul 2 07:47:23.924718 systemd-networkd[692]: lo: Gained carrier Jul 2 07:47:23.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:23.926287 systemd-networkd[692]: Enumeration completed Jul 2 07:47:23.926989 systemd-networkd[692]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:47:23.927104 systemd[1]: Started systemd-networkd.service. Jul 2 07:47:23.930668 systemd-networkd[692]: eth0: Link UP Jul 2 07:47:23.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:23.930674 systemd-networkd[692]: eth0: Gained carrier Jul 2 07:47:24.007663 iscsid[703]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:47:24.007663 iscsid[703]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 07:47:24.007663 iscsid[703]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 2 07:47:24.007663 iscsid[703]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 07:47:24.007663 iscsid[703]: If using hardware iscsi like qla4xxx this message can be ignored. 
Jul 2 07:47:24.007663 iscsid[703]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:47:24.007663 iscsid[703]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:47:24.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:23.937815 systemd[1]: Reached target network.target. Jul 2 07:47:24.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:24.081150 ignition[610]: Ignition 2.14.0 Jul 2 07:47:24.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:23.941624 systemd-networkd[692]: eth0: DHCPv4 address 10.128.0.9/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 2 07:47:24.081165 ignition[610]: Stage: fetch-offline Jul 2 07:47:23.953748 systemd[1]: Starting iscsiuio.service... Jul 2 07:47:24.081253 ignition[610]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:47:23.978805 systemd[1]: Started iscsiuio.service. Jul 2 07:47:24.081309 ignition[610]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:47:23.994211 systemd[1]: Starting iscsid.service... Jul 2 07:47:24.101964 ignition[610]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:47:24.015799 systemd[1]: Started iscsid.service. Jul 2 07:47:24.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:24.102153 ignition[610]: parsed url from cmdline: "" Jul 2 07:47:24.078346 systemd[1]: Starting dracut-initqueue.service... Jul 2 07:47:24.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:24.102159 ignition[610]: no config URL provided Jul 2 07:47:24.114986 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 07:47:24.102166 ignition[610]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:47:24.134013 systemd[1]: Finished dracut-initqueue.service. Jul 2 07:47:24.102177 ignition[610]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:47:24.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:24.148841 systemd[1]: Reached target remote-fs-pre.target. Jul 2 07:47:24.102187 ignition[610]: failed to fetch config: resource requires networking Jul 2 07:47:24.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:24.162642 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:47:24.102640 ignition[610]: Ignition finished successfully Jul 2 07:47:24.181684 systemd[1]: Reached target remote-fs.target. 
Jul 2 07:47:24.232266 ignition[717]: Ignition 2.14.0 Jul 2 07:47:24.195820 systemd[1]: Starting dracut-pre-mount.service... Jul 2 07:47:24.232276 ignition[717]: Stage: fetch Jul 2 07:47:24.220311 systemd[1]: Starting ignition-fetch.service... Jul 2 07:47:24.232401 ignition[717]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:47:24.237249 systemd[1]: Finished dracut-pre-mount.service. Jul 2 07:47:24.232437 ignition[717]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:47:24.248312 unknown[717]: fetched base config from "system" Jul 2 07:47:24.241445 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:47:24.248325 unknown[717]: fetched base config from "system" Jul 2 07:47:24.241670 ignition[717]: parsed url from cmdline: "" Jul 2 07:47:24.248335 unknown[717]: fetched user config from "gcp" Jul 2 07:47:24.241678 ignition[717]: no config URL provided Jul 2 07:47:24.262026 systemd[1]: Finished ignition-fetch.service. Jul 2 07:47:24.241685 ignition[717]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:47:24.278196 systemd[1]: Starting ignition-kargs.service... Jul 2 07:47:24.241697 ignition[717]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:47:24.317068 systemd[1]: Finished ignition-kargs.service. Jul 2 07:47:24.241734 ignition[717]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jul 2 07:47:24.325018 systemd[1]: Starting ignition-disks.service... Jul 2 07:47:24.245219 ignition[717]: GET result: OK Jul 2 07:47:24.348066 systemd[1]: Finished ignition-disks.service. Jul 2 07:47:24.245288 ignition[717]: parsing config with SHA512: 1d379240d77a3f879d1f9b94b38da1212c1ebcbfec9663982a3f3756dd7e6205468b2b088d6cbcf3ca5e6ccf062587b933b71230c7b751643127c5dd132a610a Jul 2 07:47:24.352956 systemd[1]: Reached target initrd-root-device.target. Jul 2 07:47:24.248924 ignition[717]: fetch: fetch complete Jul 2 07:47:24.375801 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:47:24.248931 ignition[717]: fetch: fetch passed Jul 2 07:47:24.391776 systemd[1]: Reached target local-fs.target. Jul 2 07:47:24.248979 ignition[717]: Ignition finished successfully Jul 2 07:47:24.406761 systemd[1]: Reached target sysinit.target. Jul 2 07:47:24.290662 ignition[723]: Ignition 2.14.0 Jul 2 07:47:24.421754 systemd[1]: Reached target basic.target. Jul 2 07:47:24.290672 ignition[723]: Stage: kargs Jul 2 07:47:24.435947 systemd[1]: Starting systemd-fsck-root.service... 
Jul 2 07:47:24.290803 ignition[723]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:47:24.290837 ignition[723]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:47:24.298265 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:47:24.299546 ignition[723]: kargs: kargs passed Jul 2 07:47:24.299611 ignition[723]: Ignition finished successfully Jul 2 07:47:24.335994 ignition[729]: Ignition 2.14.0 Jul 2 07:47:24.336003 ignition[729]: Stage: disks Jul 2 07:47:24.336135 ignition[729]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:47:24.336174 ignition[729]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:47:24.345443 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:47:24.347033 ignition[729]: disks: disks passed Jul 2 07:47:24.347092 ignition[729]: Ignition finished successfully Jul 2 07:47:24.467850 systemd-fsck[737]: ROOT: clean, 614/1628000 files, 124057/1617920 blocks Jul 2 07:47:24.680449 systemd[1]: Finished systemd-fsck-root.service. Jul 2 07:47:24.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:24.689743 systemd[1]: Mounting sysroot.mount... Jul 2 07:47:24.718664 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 07:47:24.714871 systemd[1]: Mounted sysroot.mount. Jul 2 07:47:24.725912 systemd[1]: Reached target initrd-root-fs.target. Jul 2 07:47:24.744366 systemd[1]: Mounting sysroot-usr.mount... Jul 2 07:47:24.757198 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 07:47:24.757254 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:47:24.757287 systemd[1]: Reached target ignition-diskful.target. Jul 2 07:47:24.842779 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (743) Jul 2 07:47:24.842814 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:47:24.842829 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:47:24.842844 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:47:24.772965 systemd[1]: Mounted sysroot-usr.mount. Jul 2 07:47:24.861165 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 07:47:24.796434 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:47:24.856859 systemd[1]: Starting initrd-setup-root.service... Jul 2 07:47:24.878671 initrd-setup-root[766]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:47:24.889636 initrd-setup-root[774]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:47:24.899645 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:47:24.910635 initrd-setup-root[790]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:47:24.920645 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 07:47:24.955643 systemd[1]: Finished initrd-setup-root.service. 
Jul 2 07:47:24.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:24.956888 systemd[1]: Starting ignition-mount.service... Jul 2 07:47:24.983625 systemd[1]: Starting sysroot-boot.service... Jul 2 07:47:24.991726 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 07:47:24.991849 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 2 07:47:25.017808 ignition[808]: INFO : Ignition 2.14.0 Jul 2 07:47:25.017808 ignition[808]: INFO : Stage: mount Jul 2 07:47:25.017808 ignition[808]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:47:25.017808 ignition[808]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:47:25.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:25.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:25.026160 systemd[1]: Finished sysroot-boot.service. Jul 2 07:47:25.088675 ignition[808]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:47:25.088675 ignition[808]: INFO : mount: mount passed Jul 2 07:47:25.088675 ignition[808]: INFO : Ignition finished successfully Jul 2 07:47:25.160678 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (818) Jul 2 07:47:25.160719 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:47:25.160752 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:47:25.160776 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:47:25.160799 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 07:47:25.033192 systemd[1]: Finished ignition-mount.service. Jul 2 07:47:25.050941 systemd[1]: Starting ignition-files.service... Jul 2 07:47:25.085634 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:47:25.183794 ignition[837]: INFO : Ignition 2.14.0 Jul 2 07:47:25.183794 ignition[837]: INFO : Stage: files Jul 2 07:47:25.183794 ignition[837]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:47:25.183794 ignition[837]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:47:25.183794 ignition[837]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:47:25.183794 ignition[837]: DEBUG : files: compiled without relabeling support, skipping Jul 2 07:47:25.259661 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (840) Jul 2 07:47:25.143453 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Jul 2 07:47:25.268685 ignition[837]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 07:47:25.268685 ignition[837]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 07:47:25.268685 ignition[837]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 07:47:25.268685 ignition[837]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 07:47:25.268685 ignition[837]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 07:47:25.268685 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts" Jul 2 07:47:25.268685 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:47:25.268685 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem768926882" Jul 2 07:47:25.268685 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem768926882": device or resource busy Jul 2 07:47:25.268685 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem768926882", trying btrfs: device or resource busy Jul 2 07:47:25.268685 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem768926882" Jul 2 07:47:25.268685 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem768926882" Jul 2 07:47:25.268685 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem768926882" Jul 2 07:47:25.268685 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem768926882" Jul 2 07:47:25.268685 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts" Jul 2 07:47:25.268685 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 07:47:25.268685 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 07:47:25.192597 unknown[837]: wrote ssh authorized keys file for user: core Jul 2 07:47:25.531695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Jul 2 07:47:25.531695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:47:25.531695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): op(9): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1116874136" Jul 2 07:47:25.531695 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(8): op(9): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1116874136": device or resource busy Jul 2 07:47:25.531695 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(8): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1116874136", trying btrfs: device or resource busy Jul 2 07:47:25.531695 ignition[837]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1116874136" Jul 2 07:47:25.531695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): op(a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1116874136" Jul 2 07:47:25.531695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): op(b): [started] unmounting "/mnt/oem1116874136" Jul 2 07:47:25.531695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): op(b): [finished] unmounting "/mnt/oem1116874136" Jul 2 07:47:25.531695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Jul 2 07:47:25.531695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh" Jul 2 07:47:25.531695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 07:47:25.531695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:47:25.531695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:47:25.531695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:47:25.439709 systemd-networkd[692]: eth0: Gained IPv6LL Jul 2 07:47:25.791695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:47:25.791695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Jul 2 07:47:25.791695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:47:25.791695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2048525141" Jul 2 07:47:25.791695 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2048525141": device or resource busy Jul 2 07:47:25.791695 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2048525141", trying btrfs: device or resource busy Jul 2 07:47:25.791695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2048525141" Jul 2 07:47:25.791695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2048525141" Jul 2 07:47:25.791695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem2048525141" Jul 2 07:47:25.791695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem2048525141" Jul 2 07:47:25.791695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file 
"/sysroot/etc/systemd/system/oem-gce.service" Jul 2 07:47:25.791695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Jul 2 07:47:25.791695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:47:25.791695 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem197994969" Jul 2 07:47:26.067793 kernel: kauditd_printk_skb: 26 callbacks suppressed Jul 2 07:47:26.067843 kernel: audit: type=1130 audit(1719906446.024:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.004844 systemd[1]: Finished ignition-files.service. Jul 2 07:47:26.082734 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem197994969": device or resource busy Jul 2 07:47:26.082734 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem197994969", trying btrfs: device or resource busy Jul 2 07:47:26.082734 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem197994969" Jul 2 07:47:26.082734 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem197994969" Jul 2 07:47:26.082734 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem197994969" Jul 2 07:47:26.082734 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem197994969" Jul 2 07:47:26.082734 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Jul 2 07:47:26.082734 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:47:26.082734 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jul 2 07:47:26.082734 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK Jul 2 07:47:26.082734 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:47:26.082734 ignition[837]: INFO : files: op(18): [started] processing unit "oem-gce.service" Jul 2 07:47:26.082734 ignition[837]: INFO : files: op(18): [finished] processing unit "oem-gce.service" Jul 2 07:47:26.082734 ignition[837]: INFO : files: op(19): [started] processing unit "oem-gce-enable-oslogin.service" Jul 2 07:47:26.082734 ignition[837]: INFO : files: op(19): [finished] processing unit "oem-gce-enable-oslogin.service" Jul 2 07:47:26.082734 ignition[837]: 
INFO : files: op(1a): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 2 07:47:26.082734 ignition[837]: INFO : files: op(1a): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 2 07:47:26.550694 kernel: audit: type=1130 audit(1719906446.107:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.550747 kernel: audit: type=1130 audit(1719906446.157:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.550771 kernel: audit: type=1131 audit(1719906446.157:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.550808 kernel: audit: type=1130 audit(1719906446.278:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.550828 kernel: audit: type=1131 audit(1719906446.278:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.550842 kernel: audit: type=1130 audit(1719906446.432:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.035617 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 07:47:26.588730 kernel: audit: type=1131 audit(1719906446.557:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:47:26.589018 ignition[837]: INFO : files: op(1b): [started] processing unit "containerd.service" Jul 2 07:47:26.589018 ignition[837]: INFO : files: op(1b): op(1c): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 2 07:47:26.589018 ignition[837]: INFO : files: op(1b): op(1c): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 2 07:47:26.589018 ignition[837]: INFO : files: op(1b): [finished] processing unit "containerd.service" Jul 2 07:47:26.589018 ignition[837]: INFO : files: op(1d): [started] setting preset to enabled for "oem-gce.service" Jul 2 07:47:26.589018 ignition[837]: INFO : files: op(1d): [finished] setting preset to enabled for "oem-gce.service" Jul 2 07:47:26.589018 ignition[837]: INFO : files: op(1e): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Jul 2 07:47:26.589018 ignition[837]: INFO : files: op(1e): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Jul 2 07:47:26.589018 ignition[837]: INFO : files: op(1f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 07:47:26.589018 ignition[837]: INFO : files: op(1f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 07:47:26.589018 ignition[837]: INFO : files: createResultFile: createFiles: op(20): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:47:26.589018 ignition[837]: INFO : files: createResultFile: createFiles: op(20): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:47:26.589018 ignition[837]: INFO : files: files passed Jul 2 07:47:26.589018 ignition[837]: INFO : Ignition finished successfully Jul 2 07:47:26.865861 kernel: audit: type=1131 audit(1719906446.809:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.075703 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 07:47:26.916828 kernel: audit: type=1131 audit(1719906446.878:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.916920 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:47:26.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.076902 systemd[1]: Starting ignition-quench.service... Jul 2 07:47:26.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 07:47:26.090040 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 07:47:26.109323 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 07:47:26.109470 systemd[1]: Finished ignition-quench.service. Jul 2 07:47:26.159088 systemd[1]: Reached target ignition-complete.target. Jul 2 07:47:26.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.245952 systemd[1]: Starting initrd-parse-etc.service... Jul 2 07:47:27.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.279081 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:47:27.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:27.044763 ignition[875]: INFO : Ignition 2.14.0 Jul 2 07:47:27.044763 ignition[875]: INFO : Stage: umount Jul 2 07:47:27.044763 ignition[875]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:47:27.044763 ignition[875]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:47:27.044763 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:47:27.044763 ignition[875]: INFO : umount: umount passed Jul 2 07:47:27.044763 ignition[875]: INFO : Ignition finished successfully Jul 2 07:47:27.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:27.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:27.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:27.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:27.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.279195 systemd[1]: Finished initrd-parse-etc.service. Jul 2 07:47:27.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.279977 systemd[1]: Reached target initrd-fs.target. Jul 2 07:47:26.343869 systemd[1]: Reached target initrd.target. Jul 2 07:47:26.367967 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 07:47:26.369319 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 07:47:26.393130 systemd[1]: Finished dracut-pre-pivot.service. 
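In the files stage above, every write that needs OEM assets follows the same pattern: mount /dev/disk/by-label/OEM on a throwaway directory, first as ext4 (which fails with "device or resource busy"), then retry as btrfs, write the file, and unmount again. A rough Python sketch of that try-one-filesystem-then-the-other loop is below; it assumes the mount(8) and umount(8) CLIs are available and is an illustration of the pattern, not Ignition's own code.

    import subprocess
    import tempfile

    DEVICE = "/dev/disk/by-label/OEM"  # label used by the ops in the log above

    def mount_oem_with_fallback(fstypes=("ext4", "btrfs")):
        """Mount the OEM device, trying each filesystem type in turn, the way
        the files-stage log shows op(4) failing and op(5) succeeding."""
        mnt = tempfile.mkdtemp(prefix="oem")
        for fstype in fstypes:
            result = subprocess.run(["mount", "-t", fstype, DEVICE, mnt])
            if result.returncode == 0:
                return mnt  # mounted; the caller is responsible for unmounting
        raise RuntimeError(f"could not mount {DEVICE} with any of {fstypes}")

    # Typical use (requires root, mirroring the op(5)/op(6) pair in the log):
    # mnt = mount_oem_with_fallback()
    # ... copy files out of mnt ...
    # subprocess.run(["umount", mnt])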
Jul 2 07:47:26.435247 systemd[1]: Starting initrd-cleanup.service... Jul 2 07:47:26.488448 systemd[1]: Stopped target nss-lookup.target. Jul 2 07:47:27.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.492977 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 07:47:27.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.511067 systemd[1]: Stopped target timers.target. Jul 2 07:47:26.537964 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:47:26.538155 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 07:47:27.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.559215 systemd[1]: Stopped target initrd.target. Jul 2 07:47:27.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.595974 systemd[1]: Stopped target basic.target. Jul 2 07:47:27.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:27.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:27.351000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:47:26.607021 systemd[1]: Stopped target ignition-complete.target. Jul 2 07:47:26.632026 systemd[1]: Stopped target ignition-diskful.target. Jul 2 07:47:26.659011 systemd[1]: Stopped target initrd-root-device.target. Jul 2 07:47:27.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.677011 systemd[1]: Stopped target remote-fs.target. Jul 2 07:47:27.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.695012 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 07:47:27.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.713045 systemd[1]: Stopped target sysinit.target. Jul 2 07:47:26.731985 systemd[1]: Stopped target local-fs.target. Jul 2 07:47:26.752010 systemd[1]: Stopped target local-fs-pre.target. Jul 2 07:47:27.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.771007 systemd[1]: Stopped target swap.target. Jul 2 07:47:26.789935 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:47:26.790137 systemd[1]: Stopped dracut-pre-mount.service. 
Jul 2 07:47:27.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.811214 systemd[1]: Stopped target cryptsetup.target. Jul 2 07:47:27.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.851950 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:47:27.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.852148 systemd[1]: Stopped dracut-initqueue.service. Jul 2 07:47:26.880124 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 07:47:26.880355 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 07:47:27.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.926948 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:47:27.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.927142 systemd[1]: Stopped ignition-files.service. Jul 2 07:47:27.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:27.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:26.950206 systemd[1]: Stopping ignition-mount.service... Jul 2 07:47:26.981692 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 07:47:27.651000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:47:27.651000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:47:27.651000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:47:27.652000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:47:27.652000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:47:26.981951 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 07:47:26.998270 systemd[1]: Stopping sysroot-boot.service... Jul 2 07:47:27.692079 systemd-journald[189]: Failed to send stream file descriptor to service manager: Connection refused Jul 2 07:47:27.692178 systemd-journald[189]: Received SIGTERM from PID 1 (n/a). Jul 2 07:47:27.011640 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 07:47:27.701688 iscsid[703]: iscsid shutting down. Jul 2 07:47:27.011865 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 07:47:27.021014 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 07:47:27.021208 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 07:47:27.040813 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:47:27.042040 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 07:47:27.042153 systemd[1]: Stopped ignition-mount.service. Jul 2 07:47:27.053340 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jul 2 07:47:27.053460 systemd[1]: Stopped sysroot-boot.service. Jul 2 07:47:27.060616 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:47:27.060756 systemd[1]: Stopped ignition-disks.service. Jul 2 07:47:27.072872 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:47:27.072944 systemd[1]: Stopped ignition-kargs.service. Jul 2 07:47:27.089891 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 07:47:27.089960 systemd[1]: Stopped ignition-fetch.service. Jul 2 07:47:27.131863 systemd[1]: Stopped target network.target. Jul 2 07:47:27.138832 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 07:47:27.138913 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 07:47:27.160832 systemd[1]: Stopped target paths.target. Jul 2 07:47:27.174729 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:47:27.178627 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 07:47:27.189673 systemd[1]: Stopped target slices.target. Jul 2 07:47:27.202660 systemd[1]: Stopped target sockets.target. Jul 2 07:47:27.219754 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:47:27.219816 systemd[1]: Closed iscsid.socket. Jul 2 07:47:27.234743 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:47:27.234801 systemd[1]: Closed iscsiuio.socket. Jul 2 07:47:27.248725 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:47:27.248823 systemd[1]: Stopped ignition-setup.service. Jul 2 07:47:27.263804 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:47:27.263960 systemd[1]: Stopped initrd-setup-root.service. Jul 2 07:47:27.279030 systemd[1]: Stopping systemd-networkd.service... Jul 2 07:47:27.282590 systemd-networkd[692]: eth0: DHCPv6 lease lost Jul 2 07:47:27.293944 systemd[1]: Stopping systemd-resolved.service... Jul 2 07:47:27.301340 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:47:27.301457 systemd[1]: Stopped systemd-resolved.service. Jul 2 07:47:27.323418 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:47:27.323569 systemd[1]: Stopped systemd-networkd.service. Jul 2 07:47:27.338360 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:47:27.338483 systemd[1]: Finished initrd-cleanup.service. Jul 2 07:47:27.353911 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:47:27.353957 systemd[1]: Closed systemd-networkd.socket. Jul 2 07:47:27.369799 systemd[1]: Stopping network-cleanup.service... Jul 2 07:47:27.376798 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:47:27.376885 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 07:47:27.397884 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:47:27.397977 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:47:27.414981 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:47:27.415048 systemd[1]: Stopped systemd-modules-load.service. Jul 2 07:47:27.429984 systemd[1]: Stopping systemd-udevd.service... Jul 2 07:47:27.453222 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:47:27.453883 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 07:47:27.454038 systemd[1]: Stopped systemd-udevd.service. Jul 2 07:47:27.468249 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:47:27.468341 systemd[1]: Closed systemd-udevd-control.socket. 
Jul 2 07:47:27.481822 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:47:27.481873 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 07:47:27.496777 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:47:27.496847 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 07:47:27.513864 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 07:47:27.513941 systemd[1]: Stopped dracut-cmdline.service. Jul 2 07:47:27.528845 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:47:27.528940 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 07:47:27.544844 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 07:47:27.566647 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:47:27.566764 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 07:47:27.585387 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:47:27.585534 systemd[1]: Stopped network-cleanup.service. Jul 2 07:47:27.601151 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:47:27.601264 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 07:47:27.617069 systemd[1]: Reached target initrd-switch-root.target. Jul 2 07:47:27.633806 systemd[1]: Starting initrd-switch-root.service... Jul 2 07:47:27.648909 systemd[1]: Switching root. Jul 2 07:47:27.705198 systemd-journald[189]: Journal stopped Jul 2 07:47:32.269065 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 07:47:32.269173 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 07:47:32.269197 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:47:32.269219 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:47:32.269240 kernel: SELinux: policy capability open_perms=1 Jul 2 07:47:32.269267 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:47:32.269293 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:47:32.269315 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:47:32.269335 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:47:32.269361 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:47:32.269383 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:47:32.269405 systemd[1]: Successfully loaded SELinux policy in 108.136ms. Jul 2 07:47:32.269446 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.291ms. Jul 2 07:47:32.269470 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:47:32.269496 systemd[1]: Detected virtualization kvm. Jul 2 07:47:32.269538 systemd[1]: Detected architecture x86-64. Jul 2 07:47:32.269560 systemd[1]: Detected first boot. Jul 2 07:47:32.269584 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:47:32.269608 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 07:47:32.269630 systemd[1]: Populated /etc with preset unit settings. Jul 2 07:47:32.269653 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Jul 2 07:47:32.269677 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:47:32.269705 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:47:32.269734 systemd[1]: Queued start job for default target multi-user.target. Jul 2 07:47:32.269757 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 07:47:32.269780 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 07:47:32.269809 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 2 07:47:32.269832 systemd[1]: Created slice system-getty.slice. Jul 2 07:47:32.269854 systemd[1]: Created slice system-modprobe.slice. Jul 2 07:47:32.269877 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 07:47:32.269903 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 07:47:32.269926 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 07:47:32.269948 systemd[1]: Created slice user.slice. Jul 2 07:47:32.269972 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:47:32.269993 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 07:47:32.270017 systemd[1]: Set up automount boot.automount. Jul 2 07:47:32.270040 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 07:47:32.270062 systemd[1]: Reached target integritysetup.target. Jul 2 07:47:32.270085 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:47:32.270113 systemd[1]: Reached target remote-fs.target. Jul 2 07:47:32.270139 systemd[1]: Reached target slices.target. Jul 2 07:47:32.270162 systemd[1]: Reached target swap.target. Jul 2 07:47:32.270184 systemd[1]: Reached target torcx.target. Jul 2 07:47:32.270206 systemd[1]: Reached target veritysetup.target. Jul 2 07:47:32.270230 systemd[1]: Listening on systemd-coredump.socket. Jul 2 07:47:32.270252 systemd[1]: Listening on systemd-initctl.socket. Jul 2 07:47:32.270276 kernel: kauditd_printk_skb: 46 callbacks suppressed Jul 2 07:47:32.270302 kernel: audit: type=1400 audit(1719906451.798:86): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:47:32.270323 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 07:47:32.270346 kernel: audit: type=1335 audit(1719906451.798:87): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 2 07:47:32.270368 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 07:47:32.270390 systemd[1]: Listening on systemd-journald.socket. Jul 2 07:47:32.270412 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:47:32.270434 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:47:32.270456 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:47:32.270481 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 07:47:32.270536 systemd[1]: Mounting dev-hugepages.mount... Jul 2 07:47:32.270559 systemd[1]: Mounting dev-mqueue.mount... Jul 2 07:47:32.270581 systemd[1]: Mounting media.mount... Jul 2 07:47:32.270604 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
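The locksmithd warnings above name their own fixes: CPUShares= has been superseded by CPUWeight= and MemoryLimit= by MemoryMax= (and the docker.socket note similarly asks for /run/docker.sock instead of /var/run/docker.sock). A hedged sketch of a drop-in that would use the modern directives follows; the drop-in path and the numeric values are placeholders for illustration, not taken from the unit Flatcar ships, and the deprecated lines in the base unit would still need to be removed or overridden.

    # Sketch only: compose the contents of a systemd drop-in using the
    # replacement directives named in the warnings above.
    DROPIN_PATH = "/etc/systemd/system/locksmithd.service.d/10-resource-controls.conf"

    dropin_lines = [
        "[Service]",
        "# CPUWeight= supersedes CPUShares=; MemoryMax= supersedes MemoryLimit=.",
        "CPUWeight=100",   # placeholder value, not Flatcar's
        "MemoryMax=512M",  # placeholder value, not Flatcar's
    ]

    # Printed rather than written, since writing under /etc needs root and a
    # `systemctl daemon-reload` afterwards.
    print(f"# would be written to {DROPIN_PATH}")
    print("\n".join(dropin_lines))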
Jul 2 07:47:32.270626 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 07:47:32.270648 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 07:47:32.270671 systemd[1]: Mounting tmp.mount... Jul 2 07:47:32.270693 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 07:47:32.270716 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:47:32.270742 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:47:32.270764 systemd[1]: Starting modprobe@configfs.service... Jul 2 07:47:32.270787 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:47:32.270814 systemd[1]: Starting modprobe@drm.service... Jul 2 07:47:32.270837 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:47:32.270859 systemd[1]: Starting modprobe@fuse.service... Jul 2 07:47:32.270881 systemd[1]: Starting modprobe@loop.service... Jul 2 07:47:32.270903 kernel: fuse: init (API version 7.34) Jul 2 07:47:32.270925 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 07:47:32.270952 kernel: loop: module loaded Jul 2 07:47:32.270973 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 2 07:47:32.270995 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 2 07:47:32.271019 systemd[1]: Starting systemd-journald.service... Jul 2 07:47:32.271041 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:47:32.271064 systemd[1]: Starting systemd-network-generator.service... Jul 2 07:47:32.271087 kernel: audit: type=1305 audit(1719906452.265:88): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:47:32.271114 systemd-journald[1040]: Journal started Jul 2 07:47:32.271199 systemd-journald[1040]: Runtime Journal (/run/log/journal/8a4be29a4e4a4e8efdb4cf008228968a) is 8.0M, max 148.8M, 140.8M free. 
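systemd-journald has just taken over logging at this point, and everything in this transcript is its output. For readers who want to pull the same messages back out programmatically, a small sketch using the python-systemd bindings (the systemd.journal module, assumed here to be installed) looks like this:

    # Sketch: read this boot's journal entries for a single unit.
    # Assumes the python-systemd bindings are available on the host.
    from systemd import journal

    reader = journal.Reader()
    reader.this_boot()  # restrict to the current boot
    reader.add_match(_SYSTEMD_UNIT="ignition-files.service")

    for entry in reader:
        # Each entry is a mapping of journal fields; MESSAGE holds the log text.
        print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))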
Jul 2 07:47:31.798000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:47:31.798000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 2 07:47:32.265000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:47:32.316373 kernel: audit: type=1300 audit(1719906452.265:88): arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffe6e494e20 a2=4000 a3=7ffe6e494ebc items=0 ppid=1 pid=1040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:47:32.316487 kernel: audit: type=1327 audit(1719906452.265:88): proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:47:32.265000 audit[1040]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffe6e494e20 a2=4000 a3=7ffe6e494ebc items=0 ppid=1 pid=1040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:47:32.265000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:47:32.340560 systemd[1]: Starting systemd-remount-fs.service... Jul 2 07:47:32.355550 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:47:32.374541 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:32.384555 systemd[1]: Started systemd-journald.service. Jul 2 07:47:32.394840 systemd[1]: Mounted dev-hugepages.mount. Jul 2 07:47:32.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.420417 kernel: audit: type=1130 audit(1719906452.391:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.423876 systemd[1]: Mounted dev-mqueue.mount. Jul 2 07:47:32.430851 systemd[1]: Mounted media.mount. Jul 2 07:47:32.437813 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 07:47:32.447842 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 07:47:32.456866 systemd[1]: Mounted tmp.mount. Jul 2 07:47:32.464994 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 07:47:32.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.475130 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:47:32.497579 kernel: audit: type=1130 audit(1719906452.473:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:47:32.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.505168 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:47:32.505425 systemd[1]: Finished modprobe@configfs.service. Jul 2 07:47:32.527603 kernel: audit: type=1130 audit(1719906452.503:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.536205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:47:32.536473 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:47:32.580129 kernel: audit: type=1130 audit(1719906452.534:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.580337 kernel: audit: type=1131 audit(1719906452.534:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.589184 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:47:32.589433 systemd[1]: Finished modprobe@drm.service. Jul 2 07:47:32.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.598071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:47:32.598303 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:47:32.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 07:47:32.607127 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 07:47:32.607356 systemd[1]: Finished modprobe@fuse.service. Jul 2 07:47:32.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.616062 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:47:32.616385 systemd[1]: Finished modprobe@loop.service. Jul 2 07:47:32.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.625172 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:47:32.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.635095 systemd[1]: Finished systemd-network-generator.service. Jul 2 07:47:32.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.644092 systemd[1]: Finished systemd-remount-fs.service. Jul 2 07:47:32.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.653086 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:47:32.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.662222 systemd[1]: Reached target network-pre.target. Jul 2 07:47:32.672140 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 07:47:32.681998 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 07:47:32.689654 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:47:32.692569 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 07:47:32.701445 systemd[1]: Starting systemd-journal-flush.service... Jul 2 07:47:32.711095 systemd-journald[1040]: Time spent on flushing to /var/log/journal/8a4be29a4e4a4e8efdb4cf008228968a is 92.811ms for 1062 entries. Jul 2 07:47:32.711095 systemd-journald[1040]: System Journal (/var/log/journal/8a4be29a4e4a4e8efdb4cf008228968a) is 8.0M, max 584.8M, 576.8M free. Jul 2 07:47:32.826981 systemd-journald[1040]: Received client request to flush runtime journal. 
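The "Received client request to flush runtime journal" message above is systemd-journal-flush.service asking journald to move the runtime journal under /run/log/journal into persistent storage under /var/log/journal. The same request can be issued by hand; a minimal sketch (requires root) is:

    import subprocess

    # Ask journald to flush the runtime journal to persistent storage,
    # the same operation systemd-journal-flush.service performs at boot.
    subprocess.run(["journalctl", "--flush"], check=True)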
Jul 2 07:47:32.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.710357 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:47:32.712273 systemd[1]: Starting systemd-random-seed.service... Jul 2 07:47:32.726697 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:47:32.728663 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:47:32.737686 systemd[1]: Starting systemd-sysusers.service... Jul 2 07:47:32.828328 udevadm[1062]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 07:47:32.746396 systemd[1]: Starting systemd-udev-settle.service... Jul 2 07:47:32.757395 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 07:47:32.765834 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 07:47:32.775078 systemd[1]: Finished systemd-random-seed.service. Jul 2 07:47:32.787143 systemd[1]: Reached target first-boot-complete.target. Jul 2 07:47:32.796299 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:47:32.825044 systemd[1]: Finished systemd-sysusers.service. Jul 2 07:47:32.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.834554 systemd[1]: Finished systemd-journal-flush.service. Jul 2 07:47:32.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:32.845422 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:47:32.901315 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:47:32.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.431248 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 07:47:33.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.441399 systemd[1]: Starting systemd-udevd.service... Jul 2 07:47:33.465307 systemd-udevd[1072]: Using default interface naming scheme 'v252'. Jul 2 07:47:33.514071 systemd[1]: Started systemd-udevd.service. Jul 2 07:47:33.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.526792 systemd[1]: Starting systemd-networkd.service... Jul 2 07:47:33.542547 systemd[1]: Starting systemd-userdbd.service... 
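systemd-networkd is started next; the lines that follow show it matching eth0 against /usr/lib/systemd/network/zz-default.network and acquiring 10.128.0.9/32 over DHCP from the GCE metadata server. As a rough illustration only (not the contents of the shipped zz-default.network), a minimal DHCP .network file equivalent could be generated like so:

    # Illustrative only: a minimal systemd-networkd configuration requesting
    # DHCP on eth0, similar in spirit to what the following log lines apply.
    network_unit_lines = [
        "[Match]",
        "Name=eth0",
        "",
        "[Network]",
        "DHCP=yes",
    ]

    # networkd reads *.network files from /etc/systemd/network (among other
    # directories); printed here rather than installed, since that needs root.
    print("# /etc/systemd/network/50-dhcp-eth0.network (hypothetical path)")
    print("\n".join(network_unit_lines))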
Jul 2 07:47:33.602177 systemd[1]: Found device dev-ttyS0.device. Jul 2 07:47:33.610865 systemd[1]: Started systemd-userdbd.service. Jul 2 07:47:33.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.738534 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1079) Jul 2 07:47:33.763550 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 07:47:33.772611 systemd-networkd[1087]: lo: Link UP Jul 2 07:47:33.772624 systemd-networkd[1087]: lo: Gained carrier Jul 2 07:47:33.773388 systemd-networkd[1087]: Enumeration completed Jul 2 07:47:33.774199 systemd-networkd[1087]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:47:33.780206 systemd[1]: Started systemd-networkd.service. Jul 2 07:47:33.781613 systemd-networkd[1087]: eth0: Link UP Jul 2 07:47:33.781625 systemd-networkd[1087]: eth0: Gained carrier Jul 2 07:47:33.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.791697 systemd-networkd[1087]: eth0: DHCPv4 address 10.128.0.9/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 2 07:47:33.833538 kernel: ACPI: button: Power Button [PWRF] Jul 2 07:47:33.753000 audit[1076]: AVC avc: denied { confidentiality } for pid=1076 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:47:33.753000 audit[1076]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55af9ec31c90 a1=3207c a2=7f4cfb66ebc5 a3=5 items=108 ppid=1072 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:47:33.753000 audit: CWD cwd="/" Jul 2 07:47:33.753000 audit: PATH item=0 name=(null) inode=1043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=1 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=2 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=3 name=(null) inode=13959 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=4 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=5 name=(null) inode=13960 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=6 name=(null) inode=13958 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=7 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=8 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=9 name=(null) inode=13962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=10 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=11 name=(null) inode=13963 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=12 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=13 name=(null) inode=13964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=14 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=15 name=(null) inode=13965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=16 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=17 name=(null) inode=13966 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=18 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=19 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=20 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=21 name=(null) inode=13968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=22 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=23 name=(null) inode=13969 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=24 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=25 name=(null) inode=13970 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=26 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=27 name=(null) inode=13971 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=28 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=29 name=(null) inode=13972 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=30 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=31 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=32 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=33 name=(null) inode=13974 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=34 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=35 name=(null) inode=13975 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=36 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=37 name=(null) inode=13976 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=38 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH 
item=39 name=(null) inode=13977 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=40 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=41 name=(null) inode=13978 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=42 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=43 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=44 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=45 name=(null) inode=13980 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=46 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=47 name=(null) inode=13981 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=48 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=49 name=(null) inode=13982 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=50 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=51 name=(null) inode=13983 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=52 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=53 name=(null) inode=13984 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=54 name=(null) inode=1043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=55 name=(null) inode=13985 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=56 name=(null) inode=13985 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=57 name=(null) inode=13986 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=58 name=(null) inode=13985 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=59 name=(null) inode=13987 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=60 name=(null) inode=13985 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=61 name=(null) inode=13988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=62 name=(null) inode=13988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=63 name=(null) inode=13989 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=64 name=(null) inode=13988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=65 name=(null) inode=13990 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=66 name=(null) inode=13988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=67 name=(null) inode=13991 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=68 name=(null) inode=13988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=69 name=(null) inode=13992 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=70 name=(null) inode=13988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=71 name=(null) inode=13993 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=72 name=(null) inode=13985 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=73 name=(null) inode=13994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=74 name=(null) inode=13994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=75 name=(null) inode=13995 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=76 name=(null) inode=13994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=77 name=(null) inode=13996 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=78 name=(null) inode=13994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=79 name=(null) inode=13997 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=80 name=(null) inode=13994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=81 name=(null) inode=13998 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=82 name=(null) inode=13994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=83 name=(null) inode=13999 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=84 name=(null) inode=13985 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=85 name=(null) inode=14000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=86 name=(null) inode=14000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=87 name=(null) inode=14001 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=88 name=(null) inode=14000 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=89 name=(null) inode=14002 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=90 name=(null) inode=14000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=91 name=(null) inode=14003 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=92 name=(null) inode=14000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=93 name=(null) inode=14004 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=94 name=(null) inode=14000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=95 name=(null) inode=14005 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=96 name=(null) inode=13985 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=97 name=(null) inode=14006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=98 name=(null) inode=14006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=99 name=(null) inode=14007 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=100 name=(null) inode=14006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=101 name=(null) inode=14008 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=102 name=(null) inode=14006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=103 name=(null) inode=14009 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=104 name=(null) inode=14006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=105 name=(null) inode=14010 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=106 name=(null) inode=14006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PATH item=107 name=(null) inode=14011 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:47:33.753000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 07:47:33.879540 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jul 2 07:47:33.883532 kernel: EDAC MC: Ver: 3.0.0 Jul 2 07:47:33.907569 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 2 07:47:33.916543 kernel: ACPI: button: Sleep Button [SLPF] Jul 2 07:47:33.925278 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Jul 2 07:47:33.938562 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 2 07:47:33.951570 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 07:47:33.965258 systemd[1]: Finished systemd-udev-settle.service. Jul 2 07:47:33.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:33.975442 systemd[1]: Starting lvm2-activation-early.service... Jul 2 07:47:34.004867 lvm[1110]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:47:34.034139 systemd[1]: Finished lvm2-activation-early.service. Jul 2 07:47:34.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:34.042995 systemd[1]: Reached target cryptsetup.target. Jul 2 07:47:34.053328 systemd[1]: Starting lvm2-activation.service... Jul 2 07:47:34.059485 lvm[1112]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:47:34.089073 systemd[1]: Finished lvm2-activation.service. Jul 2 07:47:34.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:34.097993 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:47:34.106681 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 07:47:34.106727 systemd[1]: Reached target local-fs.target. Jul 2 07:47:34.114673 systemd[1]: Reached target machines.target. Jul 2 07:47:34.124345 systemd[1]: Starting ldconfig.service... Jul 2 07:47:34.132537 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 07:47:34.132634 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:47:34.134436 systemd[1]: Starting systemd-boot-update.service... Jul 2 07:47:34.143419 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 07:47:34.152724 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 07:47:34.162708 systemd[1]: Starting systemd-sysext.service... Jul 2 07:47:34.170266 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1115 (bootctl) Jul 2 07:47:34.172316 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 07:47:34.188479 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 07:47:34.200221 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 07:47:34.203503 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 07:47:34.228542 kernel: loop0: detected capacity change from 0 to 209816 Jul 2 07:47:34.236676 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 07:47:34.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:34.333763 systemd-fsck[1127]: fsck.fat 4.2 (2021-01-31) Jul 2 07:47:34.333763 systemd-fsck[1127]: /dev/sda1: 789 files, 119238/258078 clusters Jul 2 07:47:34.337284 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 07:47:34.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:34.349772 systemd[1]: Mounting boot.mount... Jul 2 07:47:34.379351 systemd[1]: Mounted boot.mount. Jul 2 07:47:34.402278 systemd[1]: Finished systemd-boot-update.service. Jul 2 07:47:34.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:34.501707 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 07:47:34.534211 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 07:47:34.535375 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 07:47:34.548650 kernel: loop1: detected capacity change from 0 to 209816 Jul 2 07:47:34.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:34.575372 (sd-sysext)[1139]: Using extensions 'kubernetes'. Jul 2 07:47:34.576038 (sd-sysext)[1139]: Merged extensions into '/usr'. Jul 2 07:47:34.603217 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:34.605665 systemd[1]: Mounting usr-share-oem.mount... Jul 2 07:47:34.613723 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:47:34.616030 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:47:34.626896 systemd[1]: Starting modprobe@efi_pstore.service... 
Jul 2 07:47:34.635531 systemd[1]: Starting modprobe@loop.service... Jul 2 07:47:34.642913 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:47:34.643148 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:47:34.643340 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:34.648143 systemd[1]: Mounted usr-share-oem.mount. Jul 2 07:47:34.656145 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:47:34.656388 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:47:34.666417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:47:34.666718 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:47:34.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:34.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:34.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:34.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:34.675294 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:47:34.675945 systemd[1]: Finished modprobe@loop.service. Jul 2 07:47:34.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:34.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:34.683060 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:47:34.683235 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:47:34.686780 systemd[1]: Finished systemd-sysext.service. Jul 2 07:47:34.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:34.696806 systemd[1]: Starting ensure-sysext.service... Jul 2 07:47:34.706701 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 07:47:34.720738 systemd[1]: Reloading. Jul 2 07:47:34.729178 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 07:47:34.735681 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jul 2 07:47:34.740298 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 07:47:34.808941 ldconfig[1114]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 07:47:34.836105 /usr/lib/systemd/system-generators/torcx-generator[1172]: time="2024-07-02T07:47:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:47:34.836153 /usr/lib/systemd/system-generators/torcx-generator[1172]: time="2024-07-02T07:47:34Z" level=info msg="torcx already run" Jul 2 07:47:35.016578 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:47:35.016905 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:47:35.044142 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:47:35.127292 systemd[1]: Finished ldconfig.service. Jul 2 07:47:35.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.136527 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 07:47:35.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:47:35.149798 systemd[1]: Starting audit-rules.service... Jul 2 07:47:35.159963 systemd[1]: Starting clean-ca-certificates.service... Jul 2 07:47:35.167670 systemd-networkd[1087]: eth0: Gained IPv6LL Jul 2 07:47:35.171061 systemd[1]: Starting oem-gce-enable-oslogin.service... Jul 2 07:47:35.183052 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 07:47:35.194723 systemd[1]: Starting systemd-resolved.service... Jul 2 07:47:35.205685 systemd[1]: Starting systemd-timesyncd.service... Jul 2 07:47:35.216117 systemd[1]: Starting systemd-update-utmp.service... Jul 2 07:47:35.231412 systemd[1]: Finished clean-ca-certificates.service. Jul 2 07:47:35.232000 audit[1255]: SYSTEM_BOOT pid=1255 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Jul 2 07:47:35.235000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 07:47:35.235000 audit[1257]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdf4ba25d0 a2=420 a3=0 items=0 ppid=1225 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:47:35.237637 augenrules[1257]: No rules Jul 2 07:47:35.235000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 07:47:35.240578 systemd[1]: Finished audit-rules.service. Jul 2 07:47:35.248450 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Jul 2 07:47:35.248864 systemd[1]: Finished oem-gce-enable-oslogin.service. Jul 2 07:47:35.258399 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 07:47:35.275391 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:35.280689 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:47:35.295977 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:47:35.305876 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:47:35.317045 systemd[1]: Starting modprobe@loop.service... Jul 2 07:47:35.328864 systemd[1]: Starting oem-gce-enable-oslogin.service... Jul 2 07:47:35.336239 enable-oslogin[1270]: /etc/pam.d/sshd already exists. Not enabling OS Login Jul 2 07:47:35.337721 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:47:35.338067 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:47:35.344898 systemd[1]: Starting systemd-update-done.service... Jul 2 07:47:35.352635 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:47:35.352968 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:35.357052 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:47:35.357331 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:47:35.366692 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:47:35.366965 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:47:35.376998 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:47:35.377274 systemd[1]: Finished modprobe@loop.service. Jul 2 07:47:35.386642 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Jul 2 07:47:35.387036 systemd[1]: Finished oem-gce-enable-oslogin.service. Jul 2 07:47:35.395719 systemd[1]: Finished systemd-update-done.service. Jul 2 07:47:35.407976 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:35.408644 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:47:35.411138 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:47:35.418964 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:47:35.427680 systemd[1]: Starting modprobe@loop.service... 
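The PROCTITLE value in the audit record above is the hex-encoded command line of the process, with NUL bytes separating the arguments. A minimal decoding sketch (standard library only) recovers the command recorded in that event:

    # Decode the hex-encoded proctitle from the audit record above.
    proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = [part.decode() for part in bytes.fromhex(proctitle).split(b"\x00")]
    print(argv)   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']

The accompanying SYSCALL record (arch=c000003e, syscall=44) appears to be the sendto() over the audit netlink socket by which auditctl pushed that rule update into the kernel.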
Jul 2 07:47:35.436875 systemd[1]: Starting oem-gce-enable-oslogin.service... Jul 2 07:47:35.444186 enable-oslogin[1283]: /etc/pam.d/sshd already exists. Not enabling OS Login Jul 2 07:47:35.445692 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:47:35.445935 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:47:35.446120 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:47:35.446281 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:35.448858 systemd[1]: Finished systemd-update-utmp.service. Jul 2 07:47:35.458406 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:47:35.458694 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:47:35.468486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:47:35.468776 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:47:35.478439 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:47:35.478728 systemd[1]: Finished modprobe@loop.service. Jul 2 07:47:35.485341 systemd-resolved[1244]: Positive Trust Anchors: Jul 2 07:47:35.485832 systemd-resolved[1244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:47:35.485984 systemd-resolved[1244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:47:35.488410 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Jul 2 07:47:35.488791 systemd[1]: Finished oem-gce-enable-oslogin.service. Jul 2 07:47:35.499551 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:47:35.499755 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:47:35.505405 systemd-resolved[1244]: Defaulting to hostname 'linux'. Jul 2 07:47:35.506786 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:35.507347 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:47:35.509705 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:47:35.518885 systemd[1]: Starting modprobe@drm.service... Jul 2 07:47:35.527821 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:47:35.529607 systemd-timesyncd[1248]: Contacted time server 169.254.169.254:123 (169.254.169.254). Jul 2 07:47:35.530203 systemd-timesyncd[1248]: Initial clock synchronization to Tue 2024-07-02 07:47:35.533738 UTC. Jul 2 07:47:35.536701 systemd[1]: Starting modprobe@loop.service... Jul 2 07:47:35.546557 systemd[1]: Starting oem-gce-enable-oslogin.service... Jul 2 07:47:35.550789 enable-oslogin[1295]: /etc/pam.d/sshd already exists. 
Not enabling OS Login Jul 2 07:47:35.554791 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:47:35.555040 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:47:35.557471 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 07:47:35.565700 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:47:35.565937 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:47:35.567806 systemd[1]: Started systemd-resolved.service. Jul 2 07:47:35.577324 systemd[1]: Started systemd-timesyncd.service. Jul 2 07:47:35.587067 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:47:35.587327 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:47:35.597214 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:47:35.597474 systemd[1]: Finished modprobe@drm.service. Jul 2 07:47:35.606288 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:47:35.606578 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:47:35.615176 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:47:35.615586 systemd[1]: Finished modprobe@loop.service. Jul 2 07:47:35.625286 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Jul 2 07:47:35.625648 systemd[1]: Finished oem-gce-enable-oslogin.service. Jul 2 07:47:35.634295 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 07:47:35.644652 systemd[1]: Reached target network.target. Jul 2 07:47:35.652705 systemd[1]: Reached target network-online.target. Jul 2 07:47:35.660654 systemd[1]: Reached target nss-lookup.target. Jul 2 07:47:35.669646 systemd[1]: Reached target time-set.target. Jul 2 07:47:35.677696 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:47:35.677754 systemd[1]: Reached target sysinit.target. Jul 2 07:47:35.686780 systemd[1]: Started motdgen.path. Jul 2 07:47:35.693723 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 07:47:35.703876 systemd[1]: Started logrotate.timer. Jul 2 07:47:35.710884 systemd[1]: Started mdadm.timer. Jul 2 07:47:35.717677 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 07:47:35.725651 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:47:35.725708 systemd[1]: Reached target paths.target. Jul 2 07:47:35.732654 systemd[1]: Reached target timers.target. Jul 2 07:47:35.740351 systemd[1]: Listening on dbus.socket. Jul 2 07:47:35.749125 systemd[1]: Starting docker.socket... Jul 2 07:47:35.758206 systemd[1]: Listening on sshd.socket. Jul 2 07:47:35.765835 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:47:35.765927 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:47:35.766960 systemd[1]: Finished ensure-sysext.service. Jul 2 07:47:35.775854 systemd[1]: Listening on docker.socket. 
Jul 2 07:47:35.783683 systemd[1]: Reached target sockets.target. Jul 2 07:47:35.791632 systemd[1]: Reached target basic.target. Jul 2 07:47:35.798866 systemd[1]: System is tainted: cgroupsv1 Jul 2 07:47:35.798938 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:47:35.798977 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:47:35.800617 systemd[1]: Starting containerd.service... Jul 2 07:47:35.808969 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 2 07:47:35.819404 systemd[1]: Starting dbus.service... Jul 2 07:47:35.826296 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 07:47:35.835597 systemd[1]: Starting extend-filesystems.service... Jul 2 07:47:35.842675 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 07:47:35.845138 systemd[1]: Starting kubelet.service... Jul 2 07:47:35.855551 systemd[1]: Starting motdgen.service... Jul 2 07:47:35.863593 jq[1307]: false Jul 2 07:47:35.865060 systemd[1]: Starting oem-gce.service... Jul 2 07:47:35.871846 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 07:47:35.880593 systemd[1]: Starting sshd-keygen.service... Jul 2 07:47:35.891168 systemd[1]: Starting systemd-logind.service... Jul 2 07:47:35.898676 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:47:35.898812 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jul 2 07:47:35.900977 systemd[1]: Starting update-engine.service... Jul 2 07:47:35.909415 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 07:47:35.915876 jq[1331]: true Jul 2 07:47:35.921215 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 07:47:35.921683 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 07:47:35.928191 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 07:47:35.928635 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Jul 2 07:47:35.969633 mkfs.ext4[1343]: mke2fs 1.46.5 (30-Dec-2021) Jul 2 07:47:35.977065 mkfs.ext4[1343]: Discarding device blocks: 0/262144\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008 \u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008done Jul 2 07:47:35.977666 mkfs.ext4[1343]: Creating filesystem with 262144 4k blocks and 65536 inodes Jul 2 07:47:35.977792 mkfs.ext4[1343]: Filesystem UUID: c537b25c-9f9c-404b-8fe0-31c94cf36de6 Jul 2 07:47:35.977905 mkfs.ext4[1343]: Superblock backups stored on blocks: Jul 2 07:47:35.978711 mkfs.ext4[1343]: 32768, 98304, 163840, 229376 Jul 2 07:47:35.979076 mkfs.ext4[1343]: Allocating group tables: 0/8\u0008\u0008\u0008 \u0008\u0008\u0008done Jul 2 07:47:35.979339 mkfs.ext4[1343]: Writing inode tables: 0/8\u0008\u0008\u0008 \u0008\u0008\u0008done Jul 2 07:47:35.983242 mkfs.ext4[1343]: Creating journal (8192 blocks): done Jul 2 07:47:35.987550 extend-filesystems[1308]: Found loop1 Jul 2 07:47:35.987550 extend-filesystems[1308]: Found sda Jul 2 07:47:35.987550 extend-filesystems[1308]: Found sda1 Jul 2 07:47:35.987550 extend-filesystems[1308]: Found sda2 Jul 2 07:47:35.987550 extend-filesystems[1308]: Found sda3 Jul 2 07:47:35.987550 extend-filesystems[1308]: Found usr Jul 2 07:47:35.987550 extend-filesystems[1308]: Found sda4 Jul 2 07:47:35.987550 extend-filesystems[1308]: Found sda6 Jul 2 07:47:35.987550 extend-filesystems[1308]: Found sda7 Jul 2 07:47:35.987550 extend-filesystems[1308]: Found sda9 Jul 2 07:47:35.987550 extend-filesystems[1308]: Checking size of /dev/sda9 Jul 2 07:47:35.994307 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 07:47:36.082823 extend-filesystems[1308]: Resized partition /dev/sda9 Jul 2 07:47:36.092614 mkfs.ext4[1343]: Writing superblocks and filesystem accounting information: 0/8\u0008\u0008\u0008 \u0008\u0008\u0008done Jul 2 07:47:36.092712 jq[1339]: true Jul 2 07:47:35.994747 systemd[1]: Finished motdgen.service. Jul 2 07:47:36.091639 dbus-daemon[1306]: [system] SELinux support is enabled Jul 2 07:47:36.093343 extend-filesystems[1375]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 07:47:36.115732 kernel: loop2: detected capacity change from 0 to 2097152 Jul 2 07:47:36.091923 systemd[1]: Started dbus.service. Jul 2 07:47:36.115348 dbus-daemon[1306]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1087 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 07:47:36.109753 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 07:47:36.109797 systemd[1]: Reached target system-config.target. Jul 2 07:47:36.116752 umount[1358]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Jul 2 07:47:36.127569 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jul 2 07:47:36.132359 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 07:47:36.132401 systemd[1]: Reached target user-config.target. Jul 2 07:47:36.150688 dbus-daemon[1306]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 07:47:36.157818 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jul 2 07:47:36.156945 systemd[1]: Starting systemd-hostnamed.service... 
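The resize figures above (EXT4-fs on sda9 growing from 1617920 to 2538491 blocks of 4k, as reported by both the kernel and resize2fs) correspond to roughly 6.2 GiB growing to about 9.7 GiB. A minimal sketch of the size arithmetic, using only the block counts from the log:

    # Size arithmetic for the EXT4 online resize of /dev/sda9 logged above.
    BLOCK = 4096                          # 4k blocks, per the kernel/resize2fs messages
    old_blocks, new_blocks = 1617920, 2538491

    to_gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"{to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB")   # 6.17 GiB -> 9.68 GiB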
Jul 2 07:47:36.180461 update_engine[1330]: I0702 07:47:36.178527 1330 main.cc:92] Flatcar Update Engine starting Jul 2 07:47:36.185591 extend-filesystems[1375]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 2 07:47:36.185591 extend-filesystems[1375]: old_desc_blocks = 1, new_desc_blocks = 2 Jul 2 07:47:36.185591 extend-filesystems[1375]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jul 2 07:47:36.277709 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 07:47:36.277757 env[1340]: time="2024-07-02T07:47:36.274766631Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 07:47:36.186093 systemd[1]: Started update-engine.service. Jul 2 07:47:36.278237 extend-filesystems[1308]: Resized filesystem in /dev/sda9 Jul 2 07:47:36.286680 bash[1379]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:47:36.286838 update_engine[1330]: I0702 07:47:36.187328 1330 update_check_scheduler.cc:74] Next update check in 2m46s Jul 2 07:47:36.212353 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 07:47:36.212771 systemd[1]: Finished extend-filesystems.service. Jul 2 07:47:36.222448 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 07:47:36.248636 systemd[1]: Started locksmithd.service. Jul 2 07:47:36.378683 systemd-logind[1323]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 07:47:36.384498 systemd-logind[1323]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 2 07:47:36.384754 systemd-logind[1323]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 07:47:36.388719 systemd-logind[1323]: New seat seat0. Jul 2 07:47:36.407000 systemd[1]: Started systemd-logind.service. Jul 2 07:47:36.500333 coreos-metadata[1305]: Jul 02 07:47:36.499 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jul 2 07:47:36.519436 coreos-metadata[1305]: Jul 02 07:47:36.519 INFO Fetch failed with 404: resource not found Jul 2 07:47:36.519436 coreos-metadata[1305]: Jul 02 07:47:36.519 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jul 2 07:47:36.520789 coreos-metadata[1305]: Jul 02 07:47:36.520 INFO Fetch successful Jul 2 07:47:36.520907 coreos-metadata[1305]: Jul 02 07:47:36.520 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jul 2 07:47:36.522676 coreos-metadata[1305]: Jul 02 07:47:36.522 INFO Fetch failed with 404: resource not found Jul 2 07:47:36.522807 coreos-metadata[1305]: Jul 02 07:47:36.522 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jul 2 07:47:36.524589 coreos-metadata[1305]: Jul 02 07:47:36.524 INFO Fetch failed with 404: resource not found Jul 2 07:47:36.524717 coreos-metadata[1305]: Jul 02 07:47:36.524 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jul 2 07:47:36.525621 coreos-metadata[1305]: Jul 02 07:47:36.525 INFO Fetch successful Jul 2 07:47:36.527740 unknown[1305]: wrote ssh authorized keys file for user: core Jul 2 07:47:36.566577 env[1340]: time="2024-07-02T07:47:36.565324516Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jul 2 07:47:36.570699 env[1340]: time="2024-07-02T07:47:36.570653830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:47:36.571102 update-ssh-keys[1395]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:47:36.571816 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 2 07:47:36.594537 env[1340]: time="2024-07-02T07:47:36.593709721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:47:36.594537 env[1340]: time="2024-07-02T07:47:36.593767546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:47:36.662092 env[1340]: time="2024-07-02T07:47:36.661951918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:47:36.662092 env[1340]: time="2024-07-02T07:47:36.662006673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 07:47:36.662092 env[1340]: time="2024-07-02T07:47:36.662030665Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 07:47:36.662092 env[1340]: time="2024-07-02T07:47:36.662047807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 07:47:36.662406 env[1340]: time="2024-07-02T07:47:36.662187727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:47:36.662599 env[1340]: time="2024-07-02T07:47:36.662569207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:47:36.662931 env[1340]: time="2024-07-02T07:47:36.662898191Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:47:36.663002 env[1340]: time="2024-07-02T07:47:36.662932652Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 07:47:36.663054 env[1340]: time="2024-07-02T07:47:36.663016104Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 07:47:36.663054 env[1340]: time="2024-07-02T07:47:36.663038659Z" level=info msg="metadata content store policy set" policy=shared Jul 2 07:47:36.668828 env[1340]: time="2024-07-02T07:47:36.668780447Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 07:47:36.668982 env[1340]: time="2024-07-02T07:47:36.668838083Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 07:47:36.668982 env[1340]: time="2024-07-02T07:47:36.668859204Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Jul 2 07:47:36.668982 env[1340]: time="2024-07-02T07:47:36.668912260Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 07:47:36.668982 env[1340]: time="2024-07-02T07:47:36.668937762Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 07:47:36.668982 env[1340]: time="2024-07-02T07:47:36.668960975Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 07:47:36.668982 env[1340]: time="2024-07-02T07:47:36.668980207Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 07:47:36.669250 env[1340]: time="2024-07-02T07:47:36.669002630Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 07:47:36.669250 env[1340]: time="2024-07-02T07:47:36.669026463Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 07:47:36.669250 env[1340]: time="2024-07-02T07:47:36.669048431Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 07:47:36.669250 env[1340]: time="2024-07-02T07:47:36.669070443Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 07:47:36.669250 env[1340]: time="2024-07-02T07:47:36.669091866Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 07:47:36.669250 env[1340]: time="2024-07-02T07:47:36.669243594Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 07:47:36.669536 env[1340]: time="2024-07-02T07:47:36.669361107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 07:47:36.670195 env[1340]: time="2024-07-02T07:47:36.670165044Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 07:47:36.670280 env[1340]: time="2024-07-02T07:47:36.670229920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 07:47:36.670280 env[1340]: time="2024-07-02T07:47:36.670258417Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 07:47:36.670484 env[1340]: time="2024-07-02T07:47:36.670375605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 07:47:36.670576 env[1340]: time="2024-07-02T07:47:36.670494360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 07:47:36.670576 env[1340]: time="2024-07-02T07:47:36.670539532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 07:47:36.670576 env[1340]: time="2024-07-02T07:47:36.670562176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 07:47:36.670732 env[1340]: time="2024-07-02T07:47:36.670583786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 07:47:36.670732 env[1340]: time="2024-07-02T07:47:36.670624441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jul 2 07:47:36.670732 env[1340]: time="2024-07-02T07:47:36.670645662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 07:47:36.670732 env[1340]: time="2024-07-02T07:47:36.670667878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 07:47:36.670732 env[1340]: time="2024-07-02T07:47:36.670712159Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 07:47:36.670974 env[1340]: time="2024-07-02T07:47:36.670944391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 07:47:36.671034 env[1340]: time="2024-07-02T07:47:36.670972673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 07:47:36.671034 env[1340]: time="2024-07-02T07:47:36.670993924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 07:47:36.671128 env[1340]: time="2024-07-02T07:47:36.671044540Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 07:47:36.671128 env[1340]: time="2024-07-02T07:47:36.671074182Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 07:47:36.671222 env[1340]: time="2024-07-02T07:47:36.671123341Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 07:47:36.671222 env[1340]: time="2024-07-02T07:47:36.671153734Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 07:47:36.671318 env[1340]: time="2024-07-02T07:47:36.671223030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 07:47:36.671739 env[1340]: time="2024-07-02T07:47:36.671635496Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 07:47:36.675373 env[1340]: time="2024-07-02T07:47:36.671765530Z" level=info msg="Connect containerd service" Jul 2 07:47:36.675373 env[1340]: time="2024-07-02T07:47:36.671818230Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 07:47:36.675373 env[1340]: time="2024-07-02T07:47:36.672872545Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:47:36.675373 env[1340]: time="2024-07-02T07:47:36.673296223Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 07:47:36.675373 env[1340]: time="2024-07-02T07:47:36.673377796Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 07:47:36.673650 systemd[1]: Started containerd.service. 
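containerd comes up with the CRI plugin configuration dumped above: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:false, and registry.k8s.io/pause:3.6 as the sandbox image. Whether those values come from built-in defaults or from a shipped config file is not visible in the log; as a sketch only, an equivalent containerd 1.6-style /etc/containerd/config.toml would look roughly like this (illustrative, not the file actually present on this host):

    # Sketch: a containerd 1.6-style CRI config matching the values logged above.
    cat >/etc/containerd/config.toml <<'EOF'
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false
    EOF
    systemctl restart containerd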
Jul 2 07:47:36.688782 env[1340]: time="2024-07-02T07:47:36.673799004Z" level=info msg="Start subscribing containerd event" Jul 2 07:47:36.705135 env[1340]: time="2024-07-02T07:47:36.705070024Z" level=info msg="Start recovering state" Jul 2 07:47:36.724031 env[1340]: time="2024-07-02T07:47:36.723985727Z" level=info msg="Start event monitor" Jul 2 07:47:36.724245 env[1340]: time="2024-07-02T07:47:36.724223440Z" level=info msg="Start snapshots syncer" Jul 2 07:47:36.724350 env[1340]: time="2024-07-02T07:47:36.724323938Z" level=info msg="Start cni network conf syncer for default" Jul 2 07:47:36.724435 env[1340]: time="2024-07-02T07:47:36.724417667Z" level=info msg="Start streaming server" Jul 2 07:47:36.724664 env[1340]: time="2024-07-02T07:47:36.704901992Z" level=info msg="containerd successfully booted in 0.465627s" Jul 2 07:47:36.755169 dbus-daemon[1306]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 07:47:36.755402 systemd[1]: Started systemd-hostnamed.service. Jul 2 07:47:36.756083 dbus-daemon[1306]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1380 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 07:47:36.769340 systemd[1]: Starting polkit.service... Jul 2 07:47:36.834111 polkitd[1403]: Started polkitd version 121 Jul 2 07:47:36.857641 polkitd[1403]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 07:47:36.857738 polkitd[1403]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 07:47:36.861061 polkitd[1403]: Finished loading, compiling and executing 2 rules Jul 2 07:47:36.861725 dbus-daemon[1306]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 07:47:36.861951 systemd[1]: Started polkit.service. Jul 2 07:47:36.862655 polkitd[1403]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 07:47:36.884763 systemd-hostnamed[1380]: Hostname set to (transient) Jul 2 07:47:36.887962 systemd-resolved[1244]: System hostname changed to 'ci-3510-3-5-a428487b2d39de818036.c.flatcar-212911.internal'. Jul 2 07:47:38.096359 systemd[1]: Started kubelet.service. Jul 2 07:47:38.815934 locksmithd[1389]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 07:47:39.537846 sshd_keygen[1348]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 07:47:39.623058 systemd[1]: Finished sshd-keygen.service. Jul 2 07:47:39.633362 systemd[1]: Starting issuegen.service... Jul 2 07:47:39.646882 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 07:47:39.647250 systemd[1]: Finished issuegen.service. Jul 2 07:47:39.657315 systemd[1]: Starting systemd-user-sessions.service... Jul 2 07:47:39.668941 kubelet[1418]: E0702 07:47:39.668872 1418 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:47:39.672092 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:47:39.672352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:47:39.673616 systemd[1]: Finished systemd-user-sessions.service. Jul 2 07:47:39.684231 systemd[1]: Started getty@tty1.service. Jul 2 07:47:39.695086 systemd[1]: Started serial-getty@ttyS0.service. 
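The kubelet started at 07:47:38 exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on this image that file is expected to be provisioned later (for example by kubeadm during a join), so the failure at this point in boot is not itself a problem. Purely to illustrate the file format the error refers to, and not the configuration this node will eventually receive, a minimal KubeletConfiguration could be written like this:

    # Illustrative only: the real config.yaml is normally generated at node-join time.
    mkdir -p /var/lib/kubelet
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs          # matches the CgroupDriver value dumped later in this log
    authentication:
      anonymous:
        enabled: false
    EOF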
Jul 2 07:47:39.705126 systemd[1]: Reached target getty.target. Jul 2 07:47:41.815137 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Jul 2 07:47:43.776552 kernel: loop2: detected capacity change from 0 to 2097152 Jul 2 07:47:43.790106 systemd-nspawn[1445]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Jul 2 07:47:43.790106 systemd-nspawn[1445]: Press ^] three times within 1s to kill container. Jul 2 07:47:43.805558 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:47:43.877668 systemd[1]: Started oem-gce.service. Jul 2 07:47:43.886128 systemd[1]: Reached target multi-user.target. Jul 2 07:47:43.897085 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 07:47:43.910456 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 07:47:43.910874 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 07:47:43.921406 systemd[1]: Startup finished in 8.161s (kernel) + 15.961s (userspace) = 24.122s. Jul 2 07:47:43.933940 systemd-nspawn[1445]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jul 2 07:47:43.934112 systemd-nspawn[1445]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jul 2 07:47:43.934202 systemd-nspawn[1445]: + /usr/bin/google_instance_setup Jul 2 07:47:44.506825 instance-setup[1453]: INFO Running google_set_multiqueue. Jul 2 07:47:44.522444 instance-setup[1453]: INFO Set channels for eth0 to 2. Jul 2 07:47:44.525584 instance-setup[1453]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1. Jul 2 07:47:44.526945 instance-setup[1453]: INFO /proc/irq/27/smp_affinity_list: real affinity 0 Jul 2 07:47:44.527482 instance-setup[1453]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1. Jul 2 07:47:44.528711 instance-setup[1453]: INFO /proc/irq/28/smp_affinity_list: real affinity 0 Jul 2 07:47:44.529073 instance-setup[1453]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1. Jul 2 07:47:44.530443 instance-setup[1453]: INFO /proc/irq/29/smp_affinity_list: real affinity 1 Jul 2 07:47:44.531013 instance-setup[1453]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1. Jul 2 07:47:44.532353 instance-setup[1453]: INFO /proc/irq/30/smp_affinity_list: real affinity 1 Jul 2 07:47:44.544055 instance-setup[1453]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jul 2 07:47:44.544226 instance-setup[1453]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jul 2 07:47:44.583845 systemd-nspawn[1445]: + /usr/bin/google_metadata_script_runner --script-type startup Jul 2 07:47:44.843281 systemd[1]: Created slice system-sshd.slice. Jul 2 07:47:44.845346 systemd[1]: Started sshd@0-10.128.0.9:22-147.75.109.163:39084.service. Jul 2 07:47:44.919307 startup-script[1484]: INFO Starting startup scripts. Jul 2 07:47:44.932621 startup-script[1484]: INFO No startup scripts found in metadata. Jul 2 07:47:44.932786 startup-script[1484]: INFO Finished running startup scripts. Jul 2 07:47:44.966851 systemd-nspawn[1445]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jul 2 07:47:44.966851 systemd-nspawn[1445]: + daemon_pids=() Jul 2 07:47:44.966851 systemd-nspawn[1445]: + for d in accounts clock_skew network Jul 2 07:47:44.967445 systemd-nspawn[1445]: + daemon_pids+=($!) Jul 2 07:47:44.967606 systemd-nspawn[1445]: + for d in accounts clock_skew network Jul 2 07:47:44.967921 systemd-nspawn[1445]: + daemon_pids+=($!) 
Jul 2 07:47:44.968003 systemd-nspawn[1445]: + for d in accounts clock_skew network Jul 2 07:47:44.968265 systemd-nspawn[1445]: + /usr/bin/google_clock_skew_daemon Jul 2 07:47:44.968362 systemd-nspawn[1445]: + daemon_pids+=($!) Jul 2 07:47:44.968362 systemd-nspawn[1445]: + NOTIFY_SOCKET=/run/systemd/notify Jul 2 07:47:44.968505 systemd-nspawn[1445]: + /usr/bin/systemd-notify --ready Jul 2 07:47:44.968686 systemd-nspawn[1445]: + /usr/bin/google_accounts_daemon Jul 2 07:47:44.969105 systemd-nspawn[1445]: + /usr/bin/google_network_daemon Jul 2 07:47:45.027421 systemd-nspawn[1445]: + wait -n 36 37 38 Jul 2 07:47:45.187103 sshd[1487]: Accepted publickey for core from 147.75.109.163 port 39084 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:47:45.190545 sshd[1487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:47:45.210037 systemd[1]: Created slice user-500.slice. Jul 2 07:47:45.211844 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 07:47:45.219641 systemd-logind[1323]: New session 1 of user core. Jul 2 07:47:45.232257 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 07:47:45.237604 systemd[1]: Starting user@500.service... Jul 2 07:47:45.273718 (systemd)[1496]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:47:45.504761 systemd[1496]: Queued start job for default target default.target. Jul 2 07:47:45.505148 systemd[1496]: Reached target paths.target. Jul 2 07:47:45.505179 systemd[1496]: Reached target sockets.target. Jul 2 07:47:45.505202 systemd[1496]: Reached target timers.target. Jul 2 07:47:45.505225 systemd[1496]: Reached target basic.target. Jul 2 07:47:45.505303 systemd[1496]: Reached target default.target. Jul 2 07:47:45.505355 systemd[1496]: Startup finished in 218ms. Jul 2 07:47:45.505567 systemd[1]: Started user@500.service. Jul 2 07:47:45.507278 systemd[1]: Started session-1.scope. Jul 2 07:47:45.729142 systemd[1]: Started sshd@1-10.128.0.9:22-147.75.109.163:39088.service. Jul 2 07:47:45.845133 groupadd[1513]: group added to /etc/group: name=google-sudoers, GID=1000 Jul 2 07:47:45.849375 groupadd[1513]: group added to /etc/gshadow: name=google-sudoers Jul 2 07:47:45.865632 google-clock-skew[1490]: INFO Starting Google Clock Skew daemon. Jul 2 07:47:45.875980 groupadd[1513]: new group: name=google-sudoers, GID=1000 Jul 2 07:47:45.879918 google-clock-skew[1490]: INFO Clock drift token has changed: 0. Jul 2 07:47:45.888504 systemd-nspawn[1445]: hwclock: Cannot access the Hardware Clock via any known method. Jul 2 07:47:45.888812 systemd-nspawn[1445]: hwclock: Use the --verbose option to see the details of our search for an access method. Jul 2 07:47:45.889755 google-clock-skew[1490]: WARNING Failed to sync system time with hardware clock. Jul 2 07:47:45.904421 google-accounts[1489]: INFO Starting Google Accounts daemon. Jul 2 07:47:45.941293 google-accounts[1489]: WARNING OS Login not installed. Jul 2 07:47:45.945113 google-accounts[1489]: INFO Creating a new user account for 0. Jul 2 07:47:45.951667 systemd-nspawn[1445]: useradd: invalid user name '0': use --badname to ignore Jul 2 07:47:45.952364 google-accounts[1489]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jul 2 07:47:45.962269 google-networking[1491]: INFO Starting Google Networking daemon. 
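Inside the oem-gce container, google_set_multiqueue (07:47:44, above) spread the two virtio-net queues across the two vCPUs: IRQs 27/28 to CPU 0, IRQs 29/30 to CPU 1, and XPS masks 1 and 2 for tx-0 and tx-1. The agent performs these writes itself; restated as plain shell they amount to the following, with the IRQ numbers being specific to this particular boot:

    # Restatement of the logged affinity settings (IRQ numbers vary per boot).
    echo 0 > /proc/irq/27/smp_affinity_list
    echo 0 > /proc/irq/28/smp_affinity_list
    echo 1 > /proc/irq/29/smp_affinity_list
    echo 1 > /proc/irq/30/smp_affinity_list
    echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus   # CPU mask 0x1
    echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus   # CPU mask 0x2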
Jul 2 07:47:46.042068 sshd[1509]: Accepted publickey for core from 147.75.109.163 port 39088 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:47:46.043999 sshd[1509]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:47:46.050214 systemd-logind[1323]: New session 2 of user core. Jul 2 07:47:46.050960 systemd[1]: Started session-2.scope. Jul 2 07:47:46.256223 sshd[1509]: pam_unix(sshd:session): session closed for user core Jul 2 07:47:46.261648 systemd[1]: sshd@1-10.128.0.9:22-147.75.109.163:39088.service: Deactivated successfully. Jul 2 07:47:46.263270 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 07:47:46.263864 systemd-logind[1323]: Session 2 logged out. Waiting for processes to exit. Jul 2 07:47:46.265199 systemd-logind[1323]: Removed session 2. Jul 2 07:47:46.299846 systemd[1]: Started sshd@2-10.128.0.9:22-147.75.109.163:39096.service. Jul 2 07:47:46.587562 sshd[1530]: Accepted publickey for core from 147.75.109.163 port 39096 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:47:46.589102 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:47:46.596022 systemd[1]: Started session-3.scope. Jul 2 07:47:46.596349 systemd-logind[1323]: New session 3 of user core. Jul 2 07:47:46.794914 sshd[1530]: pam_unix(sshd:session): session closed for user core Jul 2 07:47:46.799090 systemd[1]: sshd@2-10.128.0.9:22-147.75.109.163:39096.service: Deactivated successfully. Jul 2 07:47:46.800680 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 07:47:46.800682 systemd-logind[1323]: Session 3 logged out. Waiting for processes to exit. Jul 2 07:47:46.802367 systemd-logind[1323]: Removed session 3. Jul 2 07:47:46.839745 systemd[1]: Started sshd@3-10.128.0.9:22-147.75.109.163:39108.service. Jul 2 07:47:47.131561 sshd[1537]: Accepted publickey for core from 147.75.109.163 port 39108 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:47:47.133502 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:47:47.139581 systemd-logind[1323]: New session 4 of user core. Jul 2 07:47:47.140448 systemd[1]: Started session-4.scope. Jul 2 07:47:47.347089 sshd[1537]: pam_unix(sshd:session): session closed for user core Jul 2 07:47:47.351439 systemd-logind[1323]: Session 4 logged out. Waiting for processes to exit. Jul 2 07:47:47.351764 systemd[1]: sshd@3-10.128.0.9:22-147.75.109.163:39108.service: Deactivated successfully. Jul 2 07:47:47.353129 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 07:47:47.355618 systemd-logind[1323]: Removed session 4. Jul 2 07:47:47.391252 systemd[1]: Started sshd@4-10.128.0.9:22-147.75.109.163:39122.service. Jul 2 07:47:47.684317 sshd[1544]: Accepted publickey for core from 147.75.109.163 port 39122 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:47:47.686154 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:47:47.692875 systemd[1]: Started session-5.scope. Jul 2 07:47:47.693202 systemd-logind[1323]: New session 5 of user core. Jul 2 07:47:47.882058 sudo[1548]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:47:47.882483 sudo[1548]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:47:47.900557 systemd[1]: Starting coreos-metadata.service... 
Jul 2 07:47:47.949563 coreos-metadata[1552]: Jul 02 07:47:47.949 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jul 2 07:47:47.951740 coreos-metadata[1552]: Jul 02 07:47:47.951 INFO Fetch successful Jul 2 07:47:47.951900 coreos-metadata[1552]: Jul 02 07:47:47.951 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jul 2 07:47:47.953022 coreos-metadata[1552]: Jul 02 07:47:47.952 INFO Fetch successful Jul 2 07:47:47.953152 coreos-metadata[1552]: Jul 02 07:47:47.953 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jul 2 07:47:47.954073 coreos-metadata[1552]: Jul 02 07:47:47.953 INFO Fetch successful Jul 2 07:47:47.954251 coreos-metadata[1552]: Jul 02 07:47:47.954 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jul 2 07:47:47.955095 coreos-metadata[1552]: Jul 02 07:47:47.955 INFO Fetch successful Jul 2 07:47:47.970110 systemd[1]: Finished coreos-metadata.service. Jul 2 07:47:48.896009 systemd[1]: Stopped kubelet.service. Jul 2 07:47:48.899379 systemd[1]: Starting kubelet.service... Jul 2 07:47:48.935753 systemd[1]: Reloading. Jul 2 07:47:49.067403 /usr/lib/systemd/system-generators/torcx-generator[1614]: time="2024-07-02T07:47:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:47:49.067452 /usr/lib/systemd/system-generators/torcx-generator[1614]: time="2024-07-02T07:47:49Z" level=info msg="torcx already run" Jul 2 07:47:49.204760 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:47:49.204786 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:47:49.228800 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:47:49.353248 systemd[1]: Started kubelet.service. Jul 2 07:47:49.358357 systemd[1]: Stopping kubelet.service... Jul 2 07:47:49.360096 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:47:49.360469 systemd[1]: Stopped kubelet.service. Jul 2 07:47:49.368859 systemd[1]: Starting kubelet.service... Jul 2 07:47:49.580446 systemd[1]: Started kubelet.service. Jul 2 07:47:49.646982 kubelet[1679]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:47:49.646982 kubelet[1679]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:47:49.646982 kubelet[1679]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
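coreos-metadata resolves the hostname, internal and external IPs, and machine type from the GCE metadata server at 169.254.169.254, as fetched above. Those endpoints only answer requests that carry the Metadata-Flavor header; a hedged equivalent with curl, using the same paths the log shows:

    # Same endpoints coreos-metadata queried above; the header is mandatory on GCE.
    curl -s -H "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/instance/hostname
    curl -s -H "Metadata-Flavor: Google" "http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip"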
Jul 2 07:47:49.647684 kubelet[1679]: I0702 07:47:49.647086 1679 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:47:49.972836 kubelet[1679]: I0702 07:47:49.972688 1679 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 07:47:49.972836 kubelet[1679]: I0702 07:47:49.972737 1679 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:47:49.973587 kubelet[1679]: I0702 07:47:49.973543 1679 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 07:47:50.017107 kubelet[1679]: I0702 07:47:50.017073 1679 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:47:50.042762 kubelet[1679]: I0702 07:47:50.042728 1679 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 07:47:50.044670 kubelet[1679]: I0702 07:47:50.044640 1679 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:47:50.045005 kubelet[1679]: I0702 07:47:50.044960 1679 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:47:50.045903 kubelet[1679]: I0702 07:47:50.045877 1679 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:47:50.046039 kubelet[1679]: I0702 07:47:50.045908 1679 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:47:50.047353 kubelet[1679]: I0702 07:47:50.047318 1679 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:47:50.051214 kubelet[1679]: I0702 07:47:50.051179 1679 kubelet.go:393] "Attempting to sync node with API server" Jul 2 07:47:50.051214 kubelet[1679]: I0702 07:47:50.051214 1679 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:47:50.051393 kubelet[1679]: I0702 07:47:50.051252 1679 kubelet.go:309] "Adding apiserver pod source" Jul 2 07:47:50.051393 kubelet[1679]: I0702 07:47:50.051269 1679 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:47:50.051960 kubelet[1679]: E0702 
07:47:50.051904 1679 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:47:50.051960 kubelet[1679]: E0702 07:47:50.051978 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:47:50.053437 kubelet[1679]: I0702 07:47:50.053414 1679 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:47:50.059367 kubelet[1679]: W0702 07:47:50.059347 1679 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 07:47:50.060299 kubelet[1679]: I0702 07:47:50.060275 1679 server.go:1232] "Started kubelet" Jul 2 07:47:50.060428 kubelet[1679]: W0702 07:47:50.060406 1679 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 2 07:47:50.060500 kubelet[1679]: E0702 07:47:50.060445 1679 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 2 07:47:50.080470 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 07:47:50.080705 kubelet[1679]: I0702 07:47:50.080674 1679 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:47:50.082144 kubelet[1679]: I0702 07:47:50.081921 1679 server.go:462] "Adding debug handlers to kubelet server" Jul 2 07:47:50.082484 kubelet[1679]: E0702 07:47:50.082445 1679 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 07:47:50.082665 kubelet[1679]: E0702 07:47:50.082647 1679 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:47:50.083288 kubelet[1679]: E0702 07:47:50.083162 1679 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.9.17de55d272d645eb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.9", UID:"10.128.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.9"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 60246507, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 60246507, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.9"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 07:47:50.083734 kubelet[1679]: I0702 07:47:50.083432 1679 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 07:47:50.083939 kubelet[1679]: I0702 07:47:50.083902 1679 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:47:50.086338 kubelet[1679]: I0702 07:47:50.083971 1679 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:47:50.086502 kubelet[1679]: W0702 07:47:50.060332 1679 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.128.0.9" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 2 07:47:50.086670 kubelet[1679]: E0702 07:47:50.086652 1679 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.9" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 2 07:47:50.086808 kubelet[1679]: E0702 07:47:50.086654 1679 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.9.17de55d2742bc771", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.9", UID:"10.128.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.9"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 82627441, time.Local), 
LastTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 82627441, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.9"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 07:47:50.087053 kubelet[1679]: I0702 07:47:50.087036 1679 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:47:50.087600 kubelet[1679]: I0702 07:47:50.087579 1679 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:47:50.087831 kubelet[1679]: I0702 07:47:50.087816 1679 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:47:50.090887 kubelet[1679]: W0702 07:47:50.090865 1679 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jul 2 07:47:50.091319 kubelet[1679]: E0702 07:47:50.091301 1679 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jul 2 07:47:50.092037 kubelet[1679]: E0702 07:47:50.091561 1679 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.128.0.9\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jul 2 07:47:50.165472 kubelet[1679]: I0702 07:47:50.165442 1679 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:47:50.165801 kubelet[1679]: I0702 07:47:50.165775 1679 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:47:50.165914 kubelet[1679]: I0702 07:47:50.165901 1679 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:47:50.166999 kubelet[1679]: E0702 07:47:50.166895 1679 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.9.17de55d278fd7072", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.9", UID:"10.128.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.9 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.9"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 163476594, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 163476594, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.9"}': 'events is forbidden: User 
"system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 07:47:50.168621 kubelet[1679]: E0702 07:47:50.168524 1679 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.9.17de55d278fd8b08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.9", UID:"10.128.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.9 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.9"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 163483400, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 163483400, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.9"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 07:47:50.169667 kubelet[1679]: E0702 07:47:50.169577 1679 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.9.17de55d278fd9f48", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.9", UID:"10.128.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.9 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.9"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 163488584, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 163488584, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.9"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 07:47:50.170022 kubelet[1679]: I0702 07:47:50.169976 1679 policy_none.go:49] "None policy: Start" Jul 2 07:47:50.170847 kubelet[1679]: I0702 07:47:50.170826 1679 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 07:47:50.170972 kubelet[1679]: I0702 07:47:50.170862 1679 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:47:50.179364 kubelet[1679]: I0702 07:47:50.179319 1679 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:47:50.179664 kubelet[1679]: I0702 07:47:50.179643 1679 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:47:50.182566 kubelet[1679]: E0702 07:47:50.182463 1679 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.9.17de55d27a09d2d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.9", UID:"10.128.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.9"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 181065432, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 181065432, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.9"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 07:47:50.183195 kubelet[1679]: E0702 07:47:50.183175 1679 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.9\" not found" Jul 2 07:47:50.188980 kubelet[1679]: I0702 07:47:50.188960 1679 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.9" Jul 2 07:47:50.190233 kubelet[1679]: E0702 07:47:50.190214 1679 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.9" Jul 2 07:47:50.190784 kubelet[1679]: E0702 07:47:50.190707 1679 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.9.17de55d278fd7072", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.9", UID:"10.128.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.9 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.9"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 163476594, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 188916027, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.9"}': 'events "10.128.0.9.17de55d278fd7072" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
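All of the API calls above are rejected because the kubelet is still talking to the API server as system:anonymous; "Client rotation is on, will bootstrap in background" (07:47:49) means it is waiting for bootstrap credentials before it can register the node and post events. One common way to provide them is a TLS-bootstrap kubeconfig with a bootstrap token; the sketch below uses placeholder values (server address and token are not taken from this log), and only the CA path matches the one already referenced above:

    # Placeholder values throughout; sketch of a TLS-bootstrap kubeconfig for the kubelet.
    KC=/etc/kubernetes/bootstrap-kubelet.conf
    kubectl config set-cluster bootstrap --kubeconfig=$KC \
      --server=https://<api-server>:6443 \
      --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
    kubectl config set-credentials kubelet-bootstrap --kubeconfig=$KC \
      --token=<token-id>.<token-secret>
    kubectl config set-context bootstrap --kubeconfig=$KC --cluster=bootstrap --user=kubelet-bootstrap
    kubectl config use-context bootstrap --kubeconfig=$KC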
Jul 2 07:47:50.192194 kubelet[1679]: E0702 07:47:50.192130 1679 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.9.17de55d278fd8b08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.9", UID:"10.128.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.9 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.9"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 163483400, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 188921098, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.9"}': 'events "10.128.0.9.17de55d278fd8b08" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 07:47:50.193499 kubelet[1679]: E0702 07:47:50.193429 1679 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.9.17de55d278fd9f48", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.9", UID:"10.128.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.9 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.9"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 163488584, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 188924958, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.9"}': 'events "10.128.0.9.17de55d278fd9f48" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 07:47:50.263135 kubelet[1679]: I0702 07:47:50.262937 1679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:47:50.267930 kubelet[1679]: I0702 07:47:50.267899 1679 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:47:50.268117 kubelet[1679]: I0702 07:47:50.268099 1679 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:47:50.268391 kubelet[1679]: I0702 07:47:50.268373 1679 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 07:47:50.269023 kubelet[1679]: E0702 07:47:50.268994 1679 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 2 07:47:50.270013 kubelet[1679]: W0702 07:47:50.269991 1679 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jul 2 07:47:50.270165 kubelet[1679]: E0702 07:47:50.270147 1679 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jul 2 07:47:50.294589 kubelet[1679]: E0702 07:47:50.294546 1679 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.128.0.9\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Jul 2 07:47:50.392362 kubelet[1679]: I0702 07:47:50.392304 1679 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.9" Jul 2 07:47:50.393931 kubelet[1679]: E0702 07:47:50.393896 1679 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.9" Jul 2 07:47:50.394176 kubelet[1679]: E0702 07:47:50.393886 1679 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.9.17de55d278fd7072", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.9", UID:"10.128.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.9 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.9"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 163476594, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 392195709, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.9"}': 'events "10.128.0.9.17de55d278fd7072" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 07:47:50.395407 kubelet[1679]: E0702 07:47:50.395295 1679 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.9.17de55d278fd8b08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.9", UID:"10.128.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.9 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.9"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 163483400, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 392211446, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.9"}': 'events "10.128.0.9.17de55d278fd8b08" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 07:47:50.396439 kubelet[1679]: E0702 07:47:50.396365 1679 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.9.17de55d278fd9f48", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.9", UID:"10.128.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.9 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.9"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 163488584, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 392216150, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.9"}': 'events "10.128.0.9.17de55d278fd9f48" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 07:47:50.696458 kubelet[1679]: E0702 07:47:50.696326 1679 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.128.0.9\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Jul 2 07:47:50.795892 kubelet[1679]: I0702 07:47:50.795846 1679 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.9" Jul 2 07:47:50.797329 kubelet[1679]: E0702 07:47:50.797295 1679 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.9" Jul 2 07:47:50.797540 kubelet[1679]: E0702 07:47:50.797324 1679 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.9.17de55d278fd7072", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.9", UID:"10.128.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.9 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.9"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 163476594, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 795786973, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.9"}': 'events "10.128.0.9.17de55d278fd7072" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 07:47:50.798819 kubelet[1679]: E0702 07:47:50.798734 1679 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.9.17de55d278fd8b08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.9", UID:"10.128.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.9 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.9"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 163483400, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 795795828, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.9"}': 'events "10.128.0.9.17de55d278fd8b08" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 07:47:50.799805 kubelet[1679]: E0702 07:47:50.799695 1679 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.9.17de55d278fd9f48", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.9", UID:"10.128.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.9 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.9"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 163488584, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 47, 50, 795799985, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.128.0.9"}': 'events "10.128.0.9.17de55d278fd9f48" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
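The rejections above all stem from the kubelet still talking to the API server as "system:anonymous" before its TLS bootstrap certificate is issued, so node creation, lease access and event patching are denied by RBAC; the denials stop once the certificate rotation entry below appears. As an illustration only (not part of the captured log), a minimal client-go sketch, assuming a kubeconfig at the hypothetical path /etc/kubernetes/kubelet.conf, that asks the API server whether the current credentials may create Node objects, the same verb/resource pair rejected here:

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the node in this log would normally use its
	// bootstrap credentials and, later, the rotated kubelet client certificate.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Ask the API server whether these credentials may create Node objects.
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{Verb: "create", Resource: "nodes"},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}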
Jul 2 07:47:50.981900 kubelet[1679]: I0702 07:47:50.981616 1679 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 2 07:47:51.052625 kubelet[1679]: E0702 07:47:51.052559 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:47:51.432521 kubelet[1679]: E0702 07:47:51.432372 1679 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.128.0.9" not found Jul 2 07:47:51.501029 kubelet[1679]: E0702 07:47:51.500967 1679 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.9\" not found" node="10.128.0.9" Jul 2 07:47:51.599211 kubelet[1679]: I0702 07:47:51.599179 1679 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.9" Jul 2 07:47:51.605908 kubelet[1679]: I0702 07:47:51.605870 1679 kubelet_node_status.go:73] "Successfully registered node" node="10.128.0.9" Jul 2 07:47:51.634535 kubelet[1679]: I0702 07:47:51.634492 1679 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 2 07:47:51.634991 env[1340]: time="2024-07-02T07:47:51.634931763Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 07:47:51.635651 kubelet[1679]: I0702 07:47:51.635207 1679 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 2 07:47:52.053376 kubelet[1679]: E0702 07:47:52.053334 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:47:52.053376 kubelet[1679]: I0702 07:47:52.053356 1679 apiserver.go:52] "Watching apiserver" Jul 2 07:47:52.060476 kubelet[1679]: I0702 07:47:52.060444 1679 topology_manager.go:215] "Topology Admit Handler" podUID="135bb32f-38a8-415e-ad1a-a0431dad4085" podNamespace="kube-system" podName="cilium-757dr" Jul 2 07:47:52.060950 kubelet[1679]: I0702 07:47:52.060925 1679 topology_manager.go:215] "Topology Admit Handler" podUID="78dde7a2-f7cd-4471-a856-e9b46d3dee86" podNamespace="kube-system" podName="kube-proxy-2gppj" Jul 2 07:47:52.090410 kubelet[1679]: I0702 07:47:52.089201 1679 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:47:52.097051 sudo[1548]: pam_unix(sudo:session): session closed for user root Jul 2 07:47:52.099952 kubelet[1679]: I0702 07:47:52.099922 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kxc7\" (UniqueName: \"kubernetes.io/projected/135bb32f-38a8-415e-ad1a-a0431dad4085-kube-api-access-7kxc7\") pod \"cilium-757dr\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " pod="kube-system/cilium-757dr" Jul 2 07:47:52.100241 kubelet[1679]: I0702 07:47:52.100201 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-cni-path\") pod \"cilium-757dr\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " pod="kube-system/cilium-757dr" Jul 2 07:47:52.100426 kubelet[1679]: I0702 07:47:52.100403 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-etc-cni-netd\") pod \"cilium-757dr\" 
(UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " pod="kube-system/cilium-757dr" Jul 2 07:47:52.100564 kubelet[1679]: I0702 07:47:52.100470 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/135bb32f-38a8-415e-ad1a-a0431dad4085-clustermesh-secrets\") pod \"cilium-757dr\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " pod="kube-system/cilium-757dr" Jul 2 07:47:52.100564 kubelet[1679]: I0702 07:47:52.100551 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-host-proc-sys-net\") pod \"cilium-757dr\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " pod="kube-system/cilium-757dr" Jul 2 07:47:52.100675 kubelet[1679]: I0702 07:47:52.100588 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-cilium-run\") pod \"cilium-757dr\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " pod="kube-system/cilium-757dr" Jul 2 07:47:52.100675 kubelet[1679]: I0702 07:47:52.100645 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-hostproc\") pod \"cilium-757dr\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " pod="kube-system/cilium-757dr" Jul 2 07:47:52.100792 kubelet[1679]: I0702 07:47:52.100692 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78dde7a2-f7cd-4471-a856-e9b46d3dee86-lib-modules\") pod \"kube-proxy-2gppj\" (UID: \"78dde7a2-f7cd-4471-a856-e9b46d3dee86\") " pod="kube-system/kube-proxy-2gppj" Jul 2 07:47:52.100792 kubelet[1679]: I0702 07:47:52.100755 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7pqh\" (UniqueName: \"kubernetes.io/projected/78dde7a2-f7cd-4471-a856-e9b46d3dee86-kube-api-access-w7pqh\") pod \"kube-proxy-2gppj\" (UID: \"78dde7a2-f7cd-4471-a856-e9b46d3dee86\") " pod="kube-system/kube-proxy-2gppj" Jul 2 07:47:52.100897 kubelet[1679]: I0702 07:47:52.100818 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-lib-modules\") pod \"cilium-757dr\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " pod="kube-system/cilium-757dr" Jul 2 07:47:52.100897 kubelet[1679]: I0702 07:47:52.100858 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-host-proc-sys-kernel\") pod \"cilium-757dr\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " pod="kube-system/cilium-757dr" Jul 2 07:47:52.101001 kubelet[1679]: I0702 07:47:52.100911 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/78dde7a2-f7cd-4471-a856-e9b46d3dee86-kube-proxy\") pod \"kube-proxy-2gppj\" (UID: \"78dde7a2-f7cd-4471-a856-e9b46d3dee86\") " pod="kube-system/kube-proxy-2gppj" Jul 2 07:47:52.101001 kubelet[1679]: I0702 07:47:52.100978 1679 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/135bb32f-38a8-415e-ad1a-a0431dad4085-hubble-tls\") pod \"cilium-757dr\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " pod="kube-system/cilium-757dr" Jul 2 07:47:52.101110 kubelet[1679]: I0702 07:47:52.101028 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78dde7a2-f7cd-4471-a856-e9b46d3dee86-xtables-lock\") pod \"kube-proxy-2gppj\" (UID: \"78dde7a2-f7cd-4471-a856-e9b46d3dee86\") " pod="kube-system/kube-proxy-2gppj" Jul 2 07:47:52.101110 kubelet[1679]: I0702 07:47:52.101086 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-bpf-maps\") pod \"cilium-757dr\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " pod="kube-system/cilium-757dr" Jul 2 07:47:52.101208 kubelet[1679]: I0702 07:47:52.101138 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-cilium-cgroup\") pod \"cilium-757dr\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " pod="kube-system/cilium-757dr" Jul 2 07:47:52.101208 kubelet[1679]: I0702 07:47:52.101178 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-xtables-lock\") pod \"cilium-757dr\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " pod="kube-system/cilium-757dr" Jul 2 07:47:52.101309 kubelet[1679]: I0702 07:47:52.101232 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/135bb32f-38a8-415e-ad1a-a0431dad4085-cilium-config-path\") pod \"cilium-757dr\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " pod="kube-system/cilium-757dr" Jul 2 07:47:52.141965 sshd[1544]: pam_unix(sshd:session): session closed for user core Jul 2 07:47:52.146953 systemd[1]: sshd@4-10.128.0.9:22-147.75.109.163:39122.service: Deactivated successfully. Jul 2 07:47:52.149025 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:47:52.150131 systemd-logind[1323]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:47:52.152144 systemd-logind[1323]: Removed session 5. Jul 2 07:47:52.369814 env[1340]: time="2024-07-02T07:47:52.368344766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2gppj,Uid:78dde7a2-f7cd-4471-a856-e9b46d3dee86,Namespace:kube-system,Attempt:0,}" Jul 2 07:47:52.370815 env[1340]: time="2024-07-02T07:47:52.370775827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-757dr,Uid:135bb32f-38a8-415e-ad1a-a0431dad4085,Namespace:kube-system,Attempt:0,}" Jul 2 07:47:52.922908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3641099594.mount: Deactivated successfully. 
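After registration succeeds, the entries above show the kubelet adopting pod CIDR 192.168.1.0/24 and the volume reconciler attaching the cilium-757dr and kube-proxy-2gppj volumes before their sandboxes are created. As a hedged sketch (not from the log, and assuming an admin kubeconfig at a hypothetical path), the node object and its pod CIDR can be read back with client-go like this:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for a cluster administrator.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The node registered itself under its IP, so 10.128.0.9 is its node name.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "10.128.0.9", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Spec.PodCIDR should report the 192.168.1.0/24 range seen in the log above.
	fmt.Println("podCIDR:", node.Spec.PodCIDR)
}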
Jul 2 07:47:52.930965 env[1340]: time="2024-07-02T07:47:52.930895312Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:52.932316 env[1340]: time="2024-07-02T07:47:52.932261670Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:52.936310 env[1340]: time="2024-07-02T07:47:52.936249057Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:52.937468 env[1340]: time="2024-07-02T07:47:52.937415990Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:52.938327 env[1340]: time="2024-07-02T07:47:52.938291851Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:52.940856 env[1340]: time="2024-07-02T07:47:52.940807949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:52.941747 env[1340]: time="2024-07-02T07:47:52.941711864Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:52.949347 env[1340]: time="2024-07-02T07:47:52.949285967Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:47:52.970533 env[1340]: time="2024-07-02T07:47:52.970431995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:47:52.970698 env[1340]: time="2024-07-02T07:47:52.970544994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:47:52.970698 env[1340]: time="2024-07-02T07:47:52.970586592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:47:52.970869 env[1340]: time="2024-07-02T07:47:52.970803556Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e pid=1732 runtime=io.containerd.runc.v2 Jul 2 07:47:52.975893 env[1340]: time="2024-07-02T07:47:52.975813892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:47:52.976106 env[1340]: time="2024-07-02T07:47:52.976071730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:47:52.976250 env[1340]: time="2024-07-02T07:47:52.976218861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:47:52.976614 env[1340]: time="2024-07-02T07:47:52.976571010Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/810e314b497d82bdd9e950aeaccb3e9389b2df596bacf5f03cb6601aeadfb2c8 pid=1744 runtime=io.containerd.runc.v2 Jul 2 07:47:53.041608 env[1340]: time="2024-07-02T07:47:53.041543306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-757dr,Uid:135bb32f-38a8-415e-ad1a-a0431dad4085,Namespace:kube-system,Attempt:0,} returns sandbox id \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\"" Jul 2 07:47:53.048145 kubelet[1679]: E0702 07:47:53.048109 1679 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url Jul 2 07:47:53.049046 env[1340]: time="2024-07-02T07:47:53.048992622Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 07:47:53.054058 kubelet[1679]: E0702 07:47:53.054015 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:47:53.065685 env[1340]: time="2024-07-02T07:47:53.065642846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2gppj,Uid:78dde7a2-f7cd-4471-a856-e9b46d3dee86,Namespace:kube-system,Attempt:0,} returns sandbox id \"810e314b497d82bdd9e950aeaccb3e9389b2df596bacf5f03cb6601aeadfb2c8\"" Jul 2 07:47:54.054625 kubelet[1679]: E0702 07:47:54.054547 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:47:55.055244 kubelet[1679]: E0702 07:47:55.055137 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:47:56.055362 kubelet[1679]: E0702 07:47:56.055300 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:47:57.056462 kubelet[1679]: E0702 07:47:57.056391 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:47:58.056975 kubelet[1679]: E0702 07:47:58.056927 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:47:58.134433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2434589445.mount: Deactivated successfully. 
Jul 2 07:47:59.057191 kubelet[1679]: E0702 07:47:59.057111 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:00.057342 kubelet[1679]: E0702 07:48:00.057276 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:01.058070 kubelet[1679]: E0702 07:48:01.058022 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:01.416748 env[1340]: time="2024-07-02T07:48:01.416584201Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:01.419638 env[1340]: time="2024-07-02T07:48:01.419575157Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:01.422597 env[1340]: time="2024-07-02T07:48:01.422558335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:01.423577 env[1340]: time="2024-07-02T07:48:01.423529213Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 07:48:01.425448 env[1340]: time="2024-07-02T07:48:01.425411978Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 07:48:01.427093 env[1340]: time="2024-07-02T07:48:01.427054802Z" level=info msg="CreateContainer within sandbox \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:48:01.445479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759256407.mount: Deactivated successfully. Jul 2 07:48:01.455159 env[1340]: time="2024-07-02T07:48:01.455105279Z" level=info msg="CreateContainer within sandbox \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f\"" Jul 2 07:48:01.456379 env[1340]: time="2024-07-02T07:48:01.456334399Z" level=info msg="StartContainer for \"a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f\"" Jul 2 07:48:01.546491 env[1340]: time="2024-07-02T07:48:01.546434959Z" level=info msg="StartContainer for \"a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f\" returns successfully" Jul 2 07:48:02.058527 kubelet[1679]: E0702 07:48:02.058441 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:02.439254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f-rootfs.mount: Deactivated successfully. 
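The pull that started at 07:47:53 completes above: the CRI plugin resolves the digest-pinned quay.io/cilium/cilium:v1.12.5 reference to an image ID and then creates the mount-cgroup init container inside the cilium sandbox. As an illustration under stated assumptions (default containerd socket, and the "k8s.io" namespace that the CRI plugin uses), an equivalent pull through containerd's Go client looks roughly like this; the small pause image from the sandbox entries is used to keep the example light:

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed default containerd socket path.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubernetes images on this node live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the same pause image the sandboxes above are built from.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.6", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}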
Jul 2 07:48:03.059551 kubelet[1679]: E0702 07:48:03.059479 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:03.399108 env[1340]: time="2024-07-02T07:48:03.398659303Z" level=info msg="shim disconnected" id=a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f Jul 2 07:48:03.399108 env[1340]: time="2024-07-02T07:48:03.398732517Z" level=warning msg="cleaning up after shim disconnected" id=a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f namespace=k8s.io Jul 2 07:48:03.399108 env[1340]: time="2024-07-02T07:48:03.398748916Z" level=info msg="cleaning up dead shim" Jul 2 07:48:03.411567 env[1340]: time="2024-07-02T07:48:03.411502925Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1856 runtime=io.containerd.runc.v2\n" Jul 2 07:48:04.060366 kubelet[1679]: E0702 07:48:04.060295 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:04.213990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444028689.mount: Deactivated successfully. Jul 2 07:48:04.314769 env[1340]: time="2024-07-02T07:48:04.314265133Z" level=info msg="CreateContainer within sandbox \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:48:04.338239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2637549253.mount: Deactivated successfully. Jul 2 07:48:04.347134 env[1340]: time="2024-07-02T07:48:04.347072292Z" level=info msg="CreateContainer within sandbox \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880\"" Jul 2 07:48:04.348527 env[1340]: time="2024-07-02T07:48:04.348473921Z" level=info msg="StartContainer for \"455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880\"" Jul 2 07:48:04.453107 env[1340]: time="2024-07-02T07:48:04.453052751Z" level=info msg="StartContainer for \"455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880\" returns successfully" Jul 2 07:48:04.461415 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:48:04.467013 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:48:04.467225 systemd[1]: Stopping systemd-sysctl.service... Jul 2 07:48:04.470951 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:48:04.494988 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 07:48:04.618718 env[1340]: time="2024-07-02T07:48:04.617937141Z" level=info msg="shim disconnected" id=455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880 Jul 2 07:48:04.618718 env[1340]: time="2024-07-02T07:48:04.618006116Z" level=warning msg="cleaning up after shim disconnected" id=455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880 namespace=k8s.io Jul 2 07:48:04.618718 env[1340]: time="2024-07-02T07:48:04.618020784Z" level=info msg="cleaning up dead shim" Jul 2 07:48:04.646795 env[1340]: time="2024-07-02T07:48:04.646735123Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1922 runtime=io.containerd.runc.v2\n" Jul 2 07:48:05.061978 kubelet[1679]: E0702 07:48:05.061408 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:05.076265 env[1340]: time="2024-07-02T07:48:05.076203539Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:05.078820 env[1340]: time="2024-07-02T07:48:05.078768855Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:05.081026 env[1340]: time="2024-07-02T07:48:05.080975795Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:05.083169 env[1340]: time="2024-07-02T07:48:05.083129942Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:05.083934 env[1340]: time="2024-07-02T07:48:05.083893680Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 07:48:05.086446 env[1340]: time="2024-07-02T07:48:05.086405439Z" level=info msg="CreateContainer within sandbox \"810e314b497d82bdd9e950aeaccb3e9389b2df596bacf5f03cb6601aeadfb2c8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:48:05.106758 env[1340]: time="2024-07-02T07:48:05.106696547Z" level=info msg="CreateContainer within sandbox \"810e314b497d82bdd9e950aeaccb3e9389b2df596bacf5f03cb6601aeadfb2c8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"42cbdb57605a20370b199bb6bf9bca9a53793459c781fe61cb0cd3fa18361d78\"" Jul 2 07:48:05.107646 env[1340]: time="2024-07-02T07:48:05.107596030Z" level=info msg="StartContainer for \"42cbdb57605a20370b199bb6bf9bca9a53793459c781fe61cb0cd3fa18361d78\"" Jul 2 07:48:05.189804 env[1340]: time="2024-07-02T07:48:05.189744307Z" level=info msg="StartContainer for \"42cbdb57605a20370b199bb6bf9bca9a53793459c781fe61cb0cd3fa18361d78\" returns successfully" Jul 2 07:48:05.327097 kubelet[1679]: I0702 07:48:05.326984 1679 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2gppj" podStartSLOduration=2.309776346 podCreationTimestamp="2024-07-02 07:47:51 +0000 UTC" firstStartedPulling="2024-07-02 07:47:53.0671101 +0000 UTC m=+3.474703789" lastFinishedPulling="2024-07-02 07:48:05.084232542 +0000 
UTC m=+15.491826230" observedRunningTime="2024-07-02 07:48:05.326164545 +0000 UTC m=+15.733758242" watchObservedRunningTime="2024-07-02 07:48:05.326898787 +0000 UTC m=+15.734492467" Jul 2 07:48:05.328118 env[1340]: time="2024-07-02T07:48:05.328070116Z" level=info msg="CreateContainer within sandbox \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:48:05.346021 env[1340]: time="2024-07-02T07:48:05.345961007Z" level=info msg="CreateContainer within sandbox \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b\"" Jul 2 07:48:05.346711 env[1340]: time="2024-07-02T07:48:05.346669096Z" level=info msg="StartContainer for \"3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b\"" Jul 2 07:48:05.442427 env[1340]: time="2024-07-02T07:48:05.442371553Z" level=info msg="StartContainer for \"3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b\" returns successfully" Jul 2 07:48:05.557912 env[1340]: time="2024-07-02T07:48:05.557847944Z" level=info msg="shim disconnected" id=3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b Jul 2 07:48:05.557912 env[1340]: time="2024-07-02T07:48:05.557914298Z" level=warning msg="cleaning up after shim disconnected" id=3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b namespace=k8s.io Jul 2 07:48:05.558671 env[1340]: time="2024-07-02T07:48:05.557929274Z" level=info msg="cleaning up dead shim" Jul 2 07:48:05.576211 env[1340]: time="2024-07-02T07:48:05.575761787Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2092 runtime=io.containerd.runc.v2\n" Jul 2 07:48:06.062619 kubelet[1679]: E0702 07:48:06.062558 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:06.326448 env[1340]: time="2024-07-02T07:48:06.326297323Z" level=info msg="CreateContainer within sandbox \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:48:06.345199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1419523533.mount: Deactivated successfully. 
Jul 2 07:48:06.355174 env[1340]: time="2024-07-02T07:48:06.355112136Z" level=info msg="CreateContainer within sandbox \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3\"" Jul 2 07:48:06.356010 env[1340]: time="2024-07-02T07:48:06.355950025Z" level=info msg="StartContainer for \"eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3\"" Jul 2 07:48:06.425366 env[1340]: time="2024-07-02T07:48:06.425312368Z" level=info msg="StartContainer for \"eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3\" returns successfully" Jul 2 07:48:06.451680 env[1340]: time="2024-07-02T07:48:06.451615198Z" level=info msg="shim disconnected" id=eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3 Jul 2 07:48:06.451680 env[1340]: time="2024-07-02T07:48:06.451680932Z" level=warning msg="cleaning up after shim disconnected" id=eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3 namespace=k8s.io Jul 2 07:48:06.451680 env[1340]: time="2024-07-02T07:48:06.451696358Z" level=info msg="cleaning up dead shim" Jul 2 07:48:06.462320 env[1340]: time="2024-07-02T07:48:06.462255691Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2196 runtime=io.containerd.runc.v2\n" Jul 2 07:48:06.905949 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 2 07:48:07.063212 kubelet[1679]: E0702 07:48:07.063158 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:07.333869 env[1340]: time="2024-07-02T07:48:07.333407201Z" level=info msg="CreateContainer within sandbox \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:48:07.358494 env[1340]: time="2024-07-02T07:48:07.358436730Z" level=info msg="CreateContainer within sandbox \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea\"" Jul 2 07:48:07.359532 env[1340]: time="2024-07-02T07:48:07.359472826Z" level=info msg="StartContainer for \"ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea\"" Jul 2 07:48:07.442540 env[1340]: time="2024-07-02T07:48:07.442454283Z" level=info msg="StartContainer for \"ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea\" returns successfully" Jul 2 07:48:07.634352 kubelet[1679]: I0702 07:48:07.633240 1679 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 07:48:07.941588 kernel: Initializing XFRM netlink socket Jul 2 07:48:08.064407 kubelet[1679]: E0702 07:48:08.064350 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:08.356784 kubelet[1679]: I0702 07:48:08.356372 1679 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-757dr" podStartSLOduration=8.976328339 podCreationTimestamp="2024-07-02 07:47:51 +0000 UTC" firstStartedPulling="2024-07-02 07:47:53.044103872 +0000 UTC m=+3.451697548" lastFinishedPulling="2024-07-02 07:48:01.424082607 +0000 UTC m=+11.831676344" observedRunningTime="2024-07-02 07:48:08.35628533 +0000 UTC m=+18.763879028" 
watchObservedRunningTime="2024-07-02 07:48:08.356307135 +0000 UTC m=+18.763900819" Jul 2 07:48:09.064863 kubelet[1679]: E0702 07:48:09.064798 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:09.611648 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 07:48:09.612016 systemd-networkd[1087]: cilium_host: Link UP Jul 2 07:48:09.612258 systemd-networkd[1087]: cilium_net: Link UP Jul 2 07:48:09.612265 systemd-networkd[1087]: cilium_net: Gained carrier Jul 2 07:48:09.612557 systemd-networkd[1087]: cilium_host: Gained carrier Jul 2 07:48:09.617097 systemd-networkd[1087]: cilium_host: Gained IPv6LL Jul 2 07:48:09.743640 systemd-networkd[1087]: cilium_vxlan: Link UP Jul 2 07:48:09.743651 systemd-networkd[1087]: cilium_vxlan: Gained carrier Jul 2 07:48:09.863759 systemd-networkd[1087]: cilium_net: Gained IPv6LL Jul 2 07:48:10.005609 kernel: NET: Registered PF_ALG protocol family Jul 2 07:48:10.051950 kubelet[1679]: E0702 07:48:10.051854 1679 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:10.065334 kubelet[1679]: E0702 07:48:10.065259 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:10.832698 systemd-networkd[1087]: lxc_health: Link UP Jul 2 07:48:10.864586 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:48:10.868136 systemd-networkd[1087]: lxc_health: Gained carrier Jul 2 07:48:11.066497 kubelet[1679]: E0702 07:48:11.066442 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:11.177529 kubelet[1679]: I0702 07:48:11.177466 1679 topology_manager.go:215] "Topology Admit Handler" podUID="9c63a706-27ef-46f1-9cd3-1c7cc8b88daf" podNamespace="default" podName="nginx-deployment-6d5f899847-2ksln" Jul 2 07:48:11.225589 kubelet[1679]: I0702 07:48:11.225389 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8zvd\" (UniqueName: \"kubernetes.io/projected/9c63a706-27ef-46f1-9cd3-1c7cc8b88daf-kube-api-access-d8zvd\") pod \"nginx-deployment-6d5f899847-2ksln\" (UID: \"9c63a706-27ef-46f1-9cd3-1c7cc8b88daf\") " pod="default/nginx-deployment-6d5f899847-2ksln" Jul 2 07:48:11.264213 systemd-networkd[1087]: cilium_vxlan: Gained IPv6LL Jul 2 07:48:11.488832 env[1340]: time="2024-07-02T07:48:11.487786351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2ksln,Uid:9c63a706-27ef-46f1-9cd3-1c7cc8b88daf,Namespace:default,Attempt:0,}" Jul 2 07:48:11.563150 systemd-networkd[1087]: lxc539e52454415: Link UP Jul 2 07:48:11.571556 kernel: eth0: renamed from tmpc53df Jul 2 07:48:11.593818 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc539e52454415: link becomes ready Jul 2 07:48:11.597700 systemd-networkd[1087]: lxc539e52454415: Gained carrier Jul 2 07:48:12.066908 kubelet[1679]: E0702 07:48:12.066846 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:12.544329 systemd-networkd[1087]: lxc_health: Gained IPv6LL Jul 2 07:48:13.067858 kubelet[1679]: E0702 07:48:13.067804 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:13.184227 systemd-networkd[1087]: lxc539e52454415: Gained IPv6LL Jul 2 
07:48:14.069329 kubelet[1679]: E0702 07:48:14.069277 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:15.071226 kubelet[1679]: E0702 07:48:15.071167 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:16.071858 kubelet[1679]: E0702 07:48:16.071807 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:16.084188 env[1340]: time="2024-07-02T07:48:16.084093465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:48:16.084839 env[1340]: time="2024-07-02T07:48:16.084145666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:48:16.084839 env[1340]: time="2024-07-02T07:48:16.084202401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:48:16.084839 env[1340]: time="2024-07-02T07:48:16.084625803Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c53df7e7b793ea3a7e46fb83cc3a1801b2a2f48a0cf59e20bff4bc4bb9793db3 pid=2711 runtime=io.containerd.runc.v2 Jul 2 07:48:16.116738 systemd[1]: run-containerd-runc-k8s.io-c53df7e7b793ea3a7e46fb83cc3a1801b2a2f48a0cf59e20bff4bc4bb9793db3-runc.bgAfof.mount: Deactivated successfully. Jul 2 07:48:16.183233 env[1340]: time="2024-07-02T07:48:16.183180422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2ksln,Uid:9c63a706-27ef-46f1-9cd3-1c7cc8b88daf,Namespace:default,Attempt:0,} returns sandbox id \"c53df7e7b793ea3a7e46fb83cc3a1801b2a2f48a0cf59e20bff4bc4bb9793db3\"" Jul 2 07:48:16.185992 env[1340]: time="2024-07-02T07:48:16.185954439Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 07:48:17.047222 kubelet[1679]: I0702 07:48:17.047171 1679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 07:48:17.072560 kubelet[1679]: E0702 07:48:17.072483 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:18.073340 kubelet[1679]: E0702 07:48:18.073252 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:18.741783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4222002598.mount: Deactivated successfully. 
Jul 2 07:48:19.074863 kubelet[1679]: E0702 07:48:19.074381 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:20.074914 kubelet[1679]: E0702 07:48:20.074806 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:20.427992 env[1340]: time="2024-07-02T07:48:20.427929035Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:20.430416 env[1340]: time="2024-07-02T07:48:20.430371205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:20.433400 env[1340]: time="2024-07-02T07:48:20.433358417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:20.435618 env[1340]: time="2024-07-02T07:48:20.435577323Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:20.436614 env[1340]: time="2024-07-02T07:48:20.436563321Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 07:48:20.439098 env[1340]: time="2024-07-02T07:48:20.439058172Z" level=info msg="CreateContainer within sandbox \"c53df7e7b793ea3a7e46fb83cc3a1801b2a2f48a0cf59e20bff4bc4bb9793db3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 2 07:48:20.461684 env[1340]: time="2024-07-02T07:48:20.461629422Z" level=info msg="CreateContainer within sandbox \"c53df7e7b793ea3a7e46fb83cc3a1801b2a2f48a0cf59e20bff4bc4bb9793db3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"23336bb468f0169feaab0590f9d3e74db2cd912cff8b39bf88d01c02e8964fa1\"" Jul 2 07:48:20.463308 env[1340]: time="2024-07-02T07:48:20.463267921Z" level=info msg="StartContainer for \"23336bb468f0169feaab0590f9d3e74db2cd912cff8b39bf88d01c02e8964fa1\"" Jul 2 07:48:20.546550 env[1340]: time="2024-07-02T07:48:20.546417375Z" level=info msg="StartContainer for \"23336bb468f0169feaab0590f9d3e74db2cd912cff8b39bf88d01c02e8964fa1\" returns successfully" Jul 2 07:48:21.075999 kubelet[1679]: E0702 07:48:21.075938 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:21.405423 kubelet[1679]: I0702 07:48:21.405373 1679 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-2ksln" podStartSLOduration=6.153493611 podCreationTimestamp="2024-07-02 07:48:11 +0000 UTC" firstStartedPulling="2024-07-02 07:48:16.18524152 +0000 UTC m=+26.592835203" lastFinishedPulling="2024-07-02 07:48:20.437037729 +0000 UTC m=+30.844631417" observedRunningTime="2024-07-02 07:48:21.405207859 +0000 UTC m=+31.812801557" watchObservedRunningTime="2024-07-02 07:48:21.405289825 +0000 UTC m=+31.812883520" Jul 2 07:48:21.813960 update_engine[1330]: I0702 07:48:21.813470 1330 update_attempter.cc:509] Updating boot flags... 
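By this point kube-proxy, the cilium agent and the nginx-deployment replica are all reported as started on 10.128.0.9, with their startup latencies recorded by the pod_startup_latency_tracker entries above. As a hedged sketch (same assumed admin kubeconfig as earlier, not taken from the log), everything scheduled to this node can be listed with a field selector:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// An empty namespace argument lists across all namespaces; the field
	// selector keeps only pods bound to this node.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=10.128.0.9",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}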
Jul 2 07:48:22.076570 kubelet[1679]: E0702 07:48:22.076391 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:23.076852 kubelet[1679]: E0702 07:48:23.076785 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:24.077670 kubelet[1679]: E0702 07:48:24.077612 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:25.078656 kubelet[1679]: E0702 07:48:25.078589 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:25.162038 kubelet[1679]: I0702 07:48:25.161984 1679 topology_manager.go:215] "Topology Admit Handler" podUID="9dbea009-cf33-4b25-b193-163fb8a14418" podNamespace="default" podName="nfs-server-provisioner-0" Jul 2 07:48:25.234770 kubelet[1679]: I0702 07:48:25.234708 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9fks\" (UniqueName: \"kubernetes.io/projected/9dbea009-cf33-4b25-b193-163fb8a14418-kube-api-access-t9fks\") pod \"nfs-server-provisioner-0\" (UID: \"9dbea009-cf33-4b25-b193-163fb8a14418\") " pod="default/nfs-server-provisioner-0" Jul 2 07:48:25.234992 kubelet[1679]: I0702 07:48:25.234842 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9dbea009-cf33-4b25-b193-163fb8a14418-data\") pod \"nfs-server-provisioner-0\" (UID: \"9dbea009-cf33-4b25-b193-163fb8a14418\") " pod="default/nfs-server-provisioner-0" Jul 2 07:48:25.467559 env[1340]: time="2024-07-02T07:48:25.467476642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9dbea009-cf33-4b25-b193-163fb8a14418,Namespace:default,Attempt:0,}" Jul 2 07:48:25.516716 systemd-networkd[1087]: lxc2632e6b8a99c: Link UP Jul 2 07:48:25.528772 kernel: eth0: renamed from tmp19ed6 Jul 2 07:48:25.549823 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:48:25.549996 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2632e6b8a99c: link becomes ready Jul 2 07:48:25.551999 systemd-networkd[1087]: lxc2632e6b8a99c: Gained carrier Jul 2 07:48:25.814943 env[1340]: time="2024-07-02T07:48:25.814765475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:48:25.815150 env[1340]: time="2024-07-02T07:48:25.814820965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:48:25.815150 env[1340]: time="2024-07-02T07:48:25.814848228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:48:25.815791 env[1340]: time="2024-07-02T07:48:25.815681002Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/19ed61e6f31a72aca9011d53f4ad1645cde81454d68ee1b43aa664fe29b533a5 pid=2855 runtime=io.containerd.runc.v2 Jul 2 07:48:25.850292 systemd[1]: run-containerd-runc-k8s.io-19ed61e6f31a72aca9011d53f4ad1645cde81454d68ee1b43aa664fe29b533a5-runc.jlIwkk.mount: Deactivated successfully. 
Jul 2 07:48:25.905722 env[1340]: time="2024-07-02T07:48:25.905669369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9dbea009-cf33-4b25-b193-163fb8a14418,Namespace:default,Attempt:0,} returns sandbox id \"19ed61e6f31a72aca9011d53f4ad1645cde81454d68ee1b43aa664fe29b533a5\"" Jul 2 07:48:25.908279 env[1340]: time="2024-07-02T07:48:25.908070750Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 2 07:48:26.078913 kubelet[1679]: E0702 07:48:26.078771 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:27.079761 kubelet[1679]: E0702 07:48:27.079706 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:27.327864 systemd-networkd[1087]: lxc2632e6b8a99c: Gained IPv6LL Jul 2 07:48:28.080967 kubelet[1679]: E0702 07:48:28.080890 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:28.655583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount96407215.mount: Deactivated successfully. Jul 2 07:48:29.081935 kubelet[1679]: E0702 07:48:29.081526 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:30.051917 kubelet[1679]: E0702 07:48:30.051835 1679 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:30.082294 kubelet[1679]: E0702 07:48:30.082141 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:31.026260 env[1340]: time="2024-07-02T07:48:31.026190795Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:31.029482 env[1340]: time="2024-07-02T07:48:31.029434376Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:31.031985 env[1340]: time="2024-07-02T07:48:31.031939539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:31.034354 env[1340]: time="2024-07-02T07:48:31.034310615Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:31.035402 env[1340]: time="2024-07-02T07:48:31.035351047Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jul 2 07:48:31.038830 env[1340]: time="2024-07-02T07:48:31.038783350Z" level=info msg="CreateContainer within sandbox \"19ed61e6f31a72aca9011d53f4ad1645cde81454d68ee1b43aa664fe29b533a5\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 2 07:48:31.055822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount473490148.mount: Deactivated successfully. 
Jul 2 07:48:31.063839 env[1340]: time="2024-07-02T07:48:31.063772935Z" level=info msg="CreateContainer within sandbox \"19ed61e6f31a72aca9011d53f4ad1645cde81454d68ee1b43aa664fe29b533a5\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f4857d56a7b221f642dcb6e236dd6a93159702f231145171b93acbe2b74045e6\"" Jul 2 07:48:31.064725 env[1340]: time="2024-07-02T07:48:31.064575138Z" level=info msg="StartContainer for \"f4857d56a7b221f642dcb6e236dd6a93159702f231145171b93acbe2b74045e6\"" Jul 2 07:48:31.083289 kubelet[1679]: E0702 07:48:31.083248 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:31.145855 env[1340]: time="2024-07-02T07:48:31.145776978Z" level=info msg="StartContainer for \"f4857d56a7b221f642dcb6e236dd6a93159702f231145171b93acbe2b74045e6\" returns successfully" Jul 2 07:48:32.084021 kubelet[1679]: E0702 07:48:32.083956 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:33.085152 kubelet[1679]: E0702 07:48:33.085086 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:34.086010 kubelet[1679]: E0702 07:48:34.085938 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:35.086300 kubelet[1679]: E0702 07:48:35.086232 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:36.086464 kubelet[1679]: E0702 07:48:36.086398 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:37.087536 kubelet[1679]: E0702 07:48:37.087454 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:38.088728 kubelet[1679]: E0702 07:48:38.088663 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:39.089263 kubelet[1679]: E0702 07:48:39.089195 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:40.089698 kubelet[1679]: E0702 07:48:40.089627 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:40.886494 kubelet[1679]: I0702 07:48:40.886409 1679 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.758245775 podCreationTimestamp="2024-07-02 07:48:25 +0000 UTC" firstStartedPulling="2024-07-02 07:48:25.907743967 +0000 UTC m=+36.315337649" lastFinishedPulling="2024-07-02 07:48:31.035793096 +0000 UTC m=+41.443386770" observedRunningTime="2024-07-02 07:48:31.443381194 +0000 UTC m=+41.850974894" watchObservedRunningTime="2024-07-02 07:48:40.886294896 +0000 UTC m=+51.293888592" Jul 2 07:48:40.886824 kubelet[1679]: I0702 07:48:40.886673 1679 topology_manager.go:215] "Topology Admit Handler" podUID="02655d05-410b-4229-886c-6e4526b1033a" podNamespace="default" podName="test-pod-1" Jul 2 07:48:40.951230 kubelet[1679]: I0702 07:48:40.951181 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-910e3a9b-a335-4b48-a489-136559ec19d2\" (UniqueName: 
\"kubernetes.io/nfs/02655d05-410b-4229-886c-6e4526b1033a-pvc-910e3a9b-a335-4b48-a489-136559ec19d2\") pod \"test-pod-1\" (UID: \"02655d05-410b-4229-886c-6e4526b1033a\") " pod="default/test-pod-1" Jul 2 07:48:40.951453 kubelet[1679]: I0702 07:48:40.951253 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2ftd\" (UniqueName: \"kubernetes.io/projected/02655d05-410b-4229-886c-6e4526b1033a-kube-api-access-t2ftd\") pod \"test-pod-1\" (UID: \"02655d05-410b-4229-886c-6e4526b1033a\") " pod="default/test-pod-1" Jul 2 07:48:41.089994 kubelet[1679]: E0702 07:48:41.089904 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:41.097552 kernel: FS-Cache: Loaded Jul 2 07:48:41.158220 kernel: RPC: Registered named UNIX socket transport module. Jul 2 07:48:41.158407 kernel: RPC: Registered udp transport module. Jul 2 07:48:41.158452 kernel: RPC: Registered tcp transport module. Jul 2 07:48:41.163028 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 2 07:48:41.249574 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 2 07:48:41.476023 kernel: NFS: Registering the id_resolver key type Jul 2 07:48:41.476217 kernel: Key type id_resolver registered Jul 2 07:48:41.476263 kernel: Key type id_legacy registered Jul 2 07:48:41.531406 nfsidmap[2971]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Jul 2 07:48:41.540170 nfsidmap[2972]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Jul 2 07:48:41.791697 env[1340]: time="2024-07-02T07:48:41.791545902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:02655d05-410b-4229-886c-6e4526b1033a,Namespace:default,Attempt:0,}" Jul 2 07:48:41.840065 systemd-networkd[1087]: lxc8e25b1e589b0: Link UP Jul 2 07:48:41.849625 kernel: eth0: renamed from tmp16327 Jul 2 07:48:41.862170 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:48:41.870609 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8e25b1e589b0: link becomes ready Jul 2 07:48:41.870808 systemd-networkd[1087]: lxc8e25b1e589b0: Gained carrier Jul 2 07:48:42.090817 kubelet[1679]: E0702 07:48:42.090668 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:42.116368 env[1340]: time="2024-07-02T07:48:42.116272424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:48:42.116613 env[1340]: time="2024-07-02T07:48:42.116324207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:48:42.116613 env[1340]: time="2024-07-02T07:48:42.116342728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:48:42.116613 env[1340]: time="2024-07-02T07:48:42.116568476Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/163272c3e1495297f3f8a54417233ee4df2be43eb32db75679badbe72f74a950 pid=2995 runtime=io.containerd.runc.v2 Jul 2 07:48:42.152246 systemd[1]: run-containerd-runc-k8s.io-163272c3e1495297f3f8a54417233ee4df2be43eb32db75679badbe72f74a950-runc.Jdt4S8.mount: Deactivated successfully. Jul 2 07:48:42.209406 env[1340]: time="2024-07-02T07:48:42.209356078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:02655d05-410b-4229-886c-6e4526b1033a,Namespace:default,Attempt:0,} returns sandbox id \"163272c3e1495297f3f8a54417233ee4df2be43eb32db75679badbe72f74a950\"" Jul 2 07:48:42.211973 env[1340]: time="2024-07-02T07:48:42.211930791Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 07:48:42.418810 env[1340]: time="2024-07-02T07:48:42.418738820Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:42.421276 env[1340]: time="2024-07-02T07:48:42.421225188Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:42.423744 env[1340]: time="2024-07-02T07:48:42.423701290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:42.426033 env[1340]: time="2024-07-02T07:48:42.425994034Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:42.426971 env[1340]: time="2024-07-02T07:48:42.426921869Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 07:48:42.430125 env[1340]: time="2024-07-02T07:48:42.430079951Z" level=info msg="CreateContainer within sandbox \"163272c3e1495297f3f8a54417233ee4df2be43eb32db75679badbe72f74a950\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 2 07:48:42.449663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount123719892.mount: Deactivated successfully. 
Jul 2 07:48:42.459059 env[1340]: time="2024-07-02T07:48:42.459009301Z" level=info msg="CreateContainer within sandbox \"163272c3e1495297f3f8a54417233ee4df2be43eb32db75679badbe72f74a950\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"14d6f9514a5a00e6b53e5121333ee909c71d0f0544f23e6a4161b643ddbbb72f\"" Jul 2 07:48:42.459866 env[1340]: time="2024-07-02T07:48:42.459809898Z" level=info msg="StartContainer for \"14d6f9514a5a00e6b53e5121333ee909c71d0f0544f23e6a4161b643ddbbb72f\"" Jul 2 07:48:42.527410 env[1340]: time="2024-07-02T07:48:42.526166062Z" level=info msg="StartContainer for \"14d6f9514a5a00e6b53e5121333ee909c71d0f0544f23e6a4161b643ddbbb72f\" returns successfully" Jul 2 07:48:43.091788 kubelet[1679]: E0702 07:48:43.091720 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:43.483208 kubelet[1679]: I0702 07:48:43.483163 1679 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.267019126 podCreationTimestamp="2024-07-02 07:48:25 +0000 UTC" firstStartedPulling="2024-07-02 07:48:42.211204607 +0000 UTC m=+52.618798298" lastFinishedPulling="2024-07-02 07:48:42.427298047 +0000 UTC m=+52.834891726" observedRunningTime="2024-07-02 07:48:43.483005201 +0000 UTC m=+53.890598900" watchObservedRunningTime="2024-07-02 07:48:43.483112554 +0000 UTC m=+53.890706251" Jul 2 07:48:43.903926 systemd-networkd[1087]: lxc8e25b1e589b0: Gained IPv6LL Jul 2 07:48:44.092280 kubelet[1679]: E0702 07:48:44.092216 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:45.092673 kubelet[1679]: E0702 07:48:45.092608 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:46.093752 kubelet[1679]: E0702 07:48:46.093683 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:46.245275 systemd[1]: run-containerd-runc-k8s.io-ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea-runc.B0BDWl.mount: Deactivated successfully. Jul 2 07:48:46.260788 env[1340]: time="2024-07-02T07:48:46.260496564Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:48:46.267091 env[1340]: time="2024-07-02T07:48:46.267044766Z" level=info msg="StopContainer for \"ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea\" with timeout 2 (s)" Jul 2 07:48:46.267537 env[1340]: time="2024-07-02T07:48:46.267482895Z" level=info msg="Stop container \"ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea\" with signal terminated" Jul 2 07:48:46.277948 systemd-networkd[1087]: lxc_health: Link DOWN Jul 2 07:48:46.277959 systemd-networkd[1087]: lxc_health: Lost carrier Jul 2 07:48:46.324657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea-rootfs.mount: Deactivated successfully. 
Jul 2 07:48:47.094394 kubelet[1679]: E0702 07:48:47.094329 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:48.064139 env[1340]: time="2024-07-02T07:48:48.063821051Z" level=info msg="shim disconnected" id=ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea Jul 2 07:48:48.064139 env[1340]: time="2024-07-02T07:48:48.063883038Z" level=warning msg="cleaning up after shim disconnected" id=ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea namespace=k8s.io Jul 2 07:48:48.064139 env[1340]: time="2024-07-02T07:48:48.063900288Z" level=info msg="cleaning up dead shim" Jul 2 07:48:48.076276 env[1340]: time="2024-07-02T07:48:48.076193075Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3129 runtime=io.containerd.runc.v2\n" Jul 2 07:48:48.079088 env[1340]: time="2024-07-02T07:48:48.079023413Z" level=info msg="StopContainer for \"ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea\" returns successfully" Jul 2 07:48:48.079879 env[1340]: time="2024-07-02T07:48:48.079824163Z" level=info msg="StopPodSandbox for \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\"" Jul 2 07:48:48.080023 env[1340]: time="2024-07-02T07:48:48.079923825Z" level=info msg="Container to stop \"3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:48:48.080023 env[1340]: time="2024-07-02T07:48:48.079950673Z" level=info msg="Container to stop \"eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:48:48.080023 env[1340]: time="2024-07-02T07:48:48.079968909Z" level=info msg="Container to stop \"ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:48:48.080023 env[1340]: time="2024-07-02T07:48:48.079987110Z" level=info msg="Container to stop \"455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:48:48.080023 env[1340]: time="2024-07-02T07:48:48.080004770Z" level=info msg="Container to stop \"a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:48:48.084285 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e-shm.mount: Deactivated successfully. Jul 2 07:48:48.095423 kubelet[1679]: E0702 07:48:48.095383 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:48.119757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e-rootfs.mount: Deactivated successfully. 
Jul 2 07:48:48.123461 env[1340]: time="2024-07-02T07:48:48.123401843Z" level=info msg="shim disconnected" id=12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e Jul 2 07:48:48.123662 env[1340]: time="2024-07-02T07:48:48.123468871Z" level=warning msg="cleaning up after shim disconnected" id=12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e namespace=k8s.io Jul 2 07:48:48.123662 env[1340]: time="2024-07-02T07:48:48.123484870Z" level=info msg="cleaning up dead shim" Jul 2 07:48:48.134942 env[1340]: time="2024-07-02T07:48:48.134888767Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3160 runtime=io.containerd.runc.v2\n" Jul 2 07:48:48.135530 env[1340]: time="2024-07-02T07:48:48.135476373Z" level=info msg="TearDown network for sandbox \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" successfully" Jul 2 07:48:48.135720 env[1340]: time="2024-07-02T07:48:48.135669522Z" level=info msg="StopPodSandbox for \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" returns successfully" Jul 2 07:48:48.209430 kubelet[1679]: I0702 07:48:48.209369 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-host-proc-sys-net\") pod \"135bb32f-38a8-415e-ad1a-a0431dad4085\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " Jul 2 07:48:48.209430 kubelet[1679]: I0702 07:48:48.209432 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-cilium-run\") pod \"135bb32f-38a8-415e-ad1a-a0431dad4085\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " Jul 2 07:48:48.209765 kubelet[1679]: I0702 07:48:48.209461 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-hostproc\") pod \"135bb32f-38a8-415e-ad1a-a0431dad4085\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " Jul 2 07:48:48.209765 kubelet[1679]: I0702 07:48:48.209505 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-bpf-maps\") pod \"135bb32f-38a8-415e-ad1a-a0431dad4085\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " Jul 2 07:48:48.209765 kubelet[1679]: I0702 07:48:48.209560 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-etc-cni-netd\") pod \"135bb32f-38a8-415e-ad1a-a0431dad4085\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " Jul 2 07:48:48.209765 kubelet[1679]: I0702 07:48:48.209585 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-lib-modules\") pod \"135bb32f-38a8-415e-ad1a-a0431dad4085\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " Jul 2 07:48:48.209765 kubelet[1679]: I0702 07:48:48.209616 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-xtables-lock\") pod \"135bb32f-38a8-415e-ad1a-a0431dad4085\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " Jul 2 
07:48:48.209765 kubelet[1679]: I0702 07:48:48.209644 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-host-proc-sys-kernel\") pod \"135bb32f-38a8-415e-ad1a-a0431dad4085\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " Jul 2 07:48:48.210094 kubelet[1679]: I0702 07:48:48.209683 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/135bb32f-38a8-415e-ad1a-a0431dad4085-hubble-tls\") pod \"135bb32f-38a8-415e-ad1a-a0431dad4085\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " Jul 2 07:48:48.210094 kubelet[1679]: I0702 07:48:48.209715 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-cni-path\") pod \"135bb32f-38a8-415e-ad1a-a0431dad4085\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " Jul 2 07:48:48.210094 kubelet[1679]: I0702 07:48:48.209754 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/135bb32f-38a8-415e-ad1a-a0431dad4085-cilium-config-path\") pod \"135bb32f-38a8-415e-ad1a-a0431dad4085\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " Jul 2 07:48:48.210094 kubelet[1679]: I0702 07:48:48.209794 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/135bb32f-38a8-415e-ad1a-a0431dad4085-clustermesh-secrets\") pod \"135bb32f-38a8-415e-ad1a-a0431dad4085\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " Jul 2 07:48:48.210094 kubelet[1679]: I0702 07:48:48.209827 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-cilium-cgroup\") pod \"135bb32f-38a8-415e-ad1a-a0431dad4085\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " Jul 2 07:48:48.210094 kubelet[1679]: I0702 07:48:48.209873 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kxc7\" (UniqueName: \"kubernetes.io/projected/135bb32f-38a8-415e-ad1a-a0431dad4085-kube-api-access-7kxc7\") pod \"135bb32f-38a8-415e-ad1a-a0431dad4085\" (UID: \"135bb32f-38a8-415e-ad1a-a0431dad4085\") " Jul 2 07:48:48.211092 kubelet[1679]: I0702 07:48:48.210486 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "135bb32f-38a8-415e-ad1a-a0431dad4085" (UID: "135bb32f-38a8-415e-ad1a-a0431dad4085"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:48.211092 kubelet[1679]: I0702 07:48:48.210619 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "135bb32f-38a8-415e-ad1a-a0431dad4085" (UID: "135bb32f-38a8-415e-ad1a-a0431dad4085"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:48.211092 kubelet[1679]: I0702 07:48:48.210652 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "135bb32f-38a8-415e-ad1a-a0431dad4085" (UID: "135bb32f-38a8-415e-ad1a-a0431dad4085"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:48.211092 kubelet[1679]: I0702 07:48:48.210718 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "135bb32f-38a8-415e-ad1a-a0431dad4085" (UID: "135bb32f-38a8-415e-ad1a-a0431dad4085"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:48.211612 kubelet[1679]: I0702 07:48:48.211478 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-hostproc" (OuterVolumeSpecName: "hostproc") pod "135bb32f-38a8-415e-ad1a-a0431dad4085" (UID: "135bb32f-38a8-415e-ad1a-a0431dad4085"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:48.211612 kubelet[1679]: I0702 07:48:48.211571 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "135bb32f-38a8-415e-ad1a-a0431dad4085" (UID: "135bb32f-38a8-415e-ad1a-a0431dad4085"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:48.211875 kubelet[1679]: I0702 07:48:48.211815 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "135bb32f-38a8-415e-ad1a-a0431dad4085" (UID: "135bb32f-38a8-415e-ad1a-a0431dad4085"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:48.212033 kubelet[1679]: I0702 07:48:48.211979 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "135bb32f-38a8-415e-ad1a-a0431dad4085" (UID: "135bb32f-38a8-415e-ad1a-a0431dad4085"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:48.220674 kubelet[1679]: I0702 07:48:48.220618 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-cni-path" (OuterVolumeSpecName: "cni-path") pod "135bb32f-38a8-415e-ad1a-a0431dad4085" (UID: "135bb32f-38a8-415e-ad1a-a0431dad4085"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:48.221393 systemd[1]: var-lib-kubelet-pods-135bb32f\x2d38a8\x2d415e\x2dad1a\x2da0431dad4085-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7kxc7.mount: Deactivated successfully. 
Jul 2 07:48:48.230762 kubelet[1679]: I0702 07:48:48.222384 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/135bb32f-38a8-415e-ad1a-a0431dad4085-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "135bb32f-38a8-415e-ad1a-a0431dad4085" (UID: "135bb32f-38a8-415e-ad1a-a0431dad4085"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:48:48.230762 kubelet[1679]: I0702 07:48:48.227397 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/135bb32f-38a8-415e-ad1a-a0431dad4085-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "135bb32f-38a8-415e-ad1a-a0431dad4085" (UID: "135bb32f-38a8-415e-ad1a-a0431dad4085"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:48:48.230762 kubelet[1679]: I0702 07:48:48.227613 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/135bb32f-38a8-415e-ad1a-a0431dad4085-kube-api-access-7kxc7" (OuterVolumeSpecName: "kube-api-access-7kxc7") pod "135bb32f-38a8-415e-ad1a-a0431dad4085" (UID: "135bb32f-38a8-415e-ad1a-a0431dad4085"). InnerVolumeSpecName "kube-api-access-7kxc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:48:48.230762 kubelet[1679]: I0702 07:48:48.227676 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "135bb32f-38a8-415e-ad1a-a0431dad4085" (UID: "135bb32f-38a8-415e-ad1a-a0431dad4085"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:48.230762 kubelet[1679]: I0702 07:48:48.227752 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/135bb32f-38a8-415e-ad1a-a0431dad4085-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "135bb32f-38a8-415e-ad1a-a0431dad4085" (UID: "135bb32f-38a8-415e-ad1a-a0431dad4085"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:48:48.228961 systemd[1]: var-lib-kubelet-pods-135bb32f\x2d38a8\x2d415e\x2dad1a\x2da0431dad4085-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:48:48.234418 systemd[1]: var-lib-kubelet-pods-135bb32f\x2d38a8\x2d415e\x2dad1a\x2da0431dad4085-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 07:48:48.310930 kubelet[1679]: I0702 07:48:48.310878 1679 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-hostproc\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:48.310930 kubelet[1679]: I0702 07:48:48.310933 1679 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-bpf-maps\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:48.311235 kubelet[1679]: I0702 07:48:48.310952 1679 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-etc-cni-netd\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:48.311235 kubelet[1679]: I0702 07:48:48.310969 1679 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-host-proc-sys-net\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:48.311235 kubelet[1679]: I0702 07:48:48.310986 1679 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-cilium-run\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:48.311235 kubelet[1679]: I0702 07:48:48.311003 1679 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-xtables-lock\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:48.311235 kubelet[1679]: I0702 07:48:48.311017 1679 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-lib-modules\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:48.311235 kubelet[1679]: I0702 07:48:48.311033 1679 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-host-proc-sys-kernel\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:48.311235 kubelet[1679]: I0702 07:48:48.311047 1679 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/135bb32f-38a8-415e-ad1a-a0431dad4085-hubble-tls\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:48.311235 kubelet[1679]: I0702 07:48:48.311104 1679 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/135bb32f-38a8-415e-ad1a-a0431dad4085-clustermesh-secrets\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:48.311528 kubelet[1679]: I0702 07:48:48.311121 1679 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-cilium-cgroup\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:48.311528 kubelet[1679]: I0702 07:48:48.311136 1679 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7kxc7\" (UniqueName: \"kubernetes.io/projected/135bb32f-38a8-415e-ad1a-a0431dad4085-kube-api-access-7kxc7\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:48.311528 kubelet[1679]: I0702 07:48:48.311151 1679 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/135bb32f-38a8-415e-ad1a-a0431dad4085-cni-path\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:48.311528 kubelet[1679]: I0702 07:48:48.311169 
1679 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/135bb32f-38a8-415e-ad1a-a0431dad4085-cilium-config-path\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:48.478905 kubelet[1679]: I0702 07:48:48.478867 1679 scope.go:117] "RemoveContainer" containerID="ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea" Jul 2 07:48:48.481641 env[1340]: time="2024-07-02T07:48:48.481590474Z" level=info msg="RemoveContainer for \"ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea\"" Jul 2 07:48:48.488219 env[1340]: time="2024-07-02T07:48:48.488149499Z" level=info msg="RemoveContainer for \"ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea\" returns successfully" Jul 2 07:48:48.488627 kubelet[1679]: I0702 07:48:48.488594 1679 scope.go:117] "RemoveContainer" containerID="eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3" Jul 2 07:48:48.490250 env[1340]: time="2024-07-02T07:48:48.490207911Z" level=info msg="RemoveContainer for \"eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3\"" Jul 2 07:48:48.494408 env[1340]: time="2024-07-02T07:48:48.494350757Z" level=info msg="RemoveContainer for \"eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3\" returns successfully" Jul 2 07:48:48.494655 kubelet[1679]: I0702 07:48:48.494607 1679 scope.go:117] "RemoveContainer" containerID="3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b" Jul 2 07:48:48.496112 env[1340]: time="2024-07-02T07:48:48.496069483Z" level=info msg="RemoveContainer for \"3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b\"" Jul 2 07:48:48.500001 env[1340]: time="2024-07-02T07:48:48.499951007Z" level=info msg="RemoveContainer for \"3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b\" returns successfully" Jul 2 07:48:48.500230 kubelet[1679]: I0702 07:48:48.500185 1679 scope.go:117] "RemoveContainer" containerID="455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880" Jul 2 07:48:48.501886 env[1340]: time="2024-07-02T07:48:48.501850086Z" level=info msg="RemoveContainer for \"455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880\"" Jul 2 07:48:48.505688 env[1340]: time="2024-07-02T07:48:48.505644990Z" level=info msg="RemoveContainer for \"455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880\" returns successfully" Jul 2 07:48:48.505951 kubelet[1679]: I0702 07:48:48.505906 1679 scope.go:117] "RemoveContainer" containerID="a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f" Jul 2 07:48:48.507331 env[1340]: time="2024-07-02T07:48:48.507286747Z" level=info msg="RemoveContainer for \"a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f\"" Jul 2 07:48:48.511359 env[1340]: time="2024-07-02T07:48:48.511301646Z" level=info msg="RemoveContainer for \"a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f\" returns successfully" Jul 2 07:48:48.511578 kubelet[1679]: I0702 07:48:48.511552 1679 scope.go:117] "RemoveContainer" containerID="ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea" Jul 2 07:48:48.511925 env[1340]: time="2024-07-02T07:48:48.511831994Z" level=error msg="ContainerStatus for \"ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea\": not found" Jul 2 07:48:48.512229 kubelet[1679]: E0702 
07:48:48.512208 1679 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea\": not found" containerID="ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea" Jul 2 07:48:48.512352 kubelet[1679]: I0702 07:48:48.512327 1679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea"} err="failed to get container status \"ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac6aad8c15589e207be83d06b38c65a980dd8a4b80226f6ca0b2469a632ac4ea\": not found" Jul 2 07:48:48.512352 kubelet[1679]: I0702 07:48:48.512352 1679 scope.go:117] "RemoveContainer" containerID="eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3" Jul 2 07:48:48.512753 env[1340]: time="2024-07-02T07:48:48.512668241Z" level=error msg="ContainerStatus for \"eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3\": not found" Jul 2 07:48:48.512986 kubelet[1679]: E0702 07:48:48.512950 1679 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3\": not found" containerID="eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3" Jul 2 07:48:48.513155 kubelet[1679]: I0702 07:48:48.512993 1679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3"} err="failed to get container status \"eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb1af96324254b46714d4bc02bd83fd3c28e1a08e79e9275673f3333ee24a5b3\": not found" Jul 2 07:48:48.513155 kubelet[1679]: I0702 07:48:48.513012 1679 scope.go:117] "RemoveContainer" containerID="3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b" Jul 2 07:48:48.513463 env[1340]: time="2024-07-02T07:48:48.513394231Z" level=error msg="ContainerStatus for \"3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b\": not found" Jul 2 07:48:48.513684 kubelet[1679]: E0702 07:48:48.513665 1679 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b\": not found" containerID="3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b" Jul 2 07:48:48.513789 kubelet[1679]: I0702 07:48:48.513711 1679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b"} err="failed to get container status \"3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"3c16d20e9d6554c95d6461f1423d1362228327710fcf46d6aa2a6308711a466b\": not found" Jul 2 07:48:48.513789 kubelet[1679]: I0702 07:48:48.513729 1679 scope.go:117] "RemoveContainer" containerID="455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880" Jul 2 07:48:48.514016 env[1340]: time="2024-07-02T07:48:48.513935023Z" level=error msg="ContainerStatus for \"455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880\": not found" Jul 2 07:48:48.514249 kubelet[1679]: E0702 07:48:48.514216 1679 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880\": not found" containerID="455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880" Jul 2 07:48:48.514352 kubelet[1679]: I0702 07:48:48.514258 1679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880"} err="failed to get container status \"455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880\": rpc error: code = NotFound desc = an error occurred when try to find container \"455a3f613a01ffa183f314ab8661cffb6af4efb6398a9ce41008e35c05007880\": not found" Jul 2 07:48:48.514352 kubelet[1679]: I0702 07:48:48.514274 1679 scope.go:117] "RemoveContainer" containerID="a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f" Jul 2 07:48:48.514579 env[1340]: time="2024-07-02T07:48:48.514490793Z" level=error msg="ContainerStatus for \"a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f\": not found" Jul 2 07:48:48.514783 kubelet[1679]: E0702 07:48:48.514736 1679 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f\": not found" containerID="a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f" Jul 2 07:48:48.514783 kubelet[1679]: I0702 07:48:48.514776 1679 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f"} err="failed to get container status \"a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"a75efaaa73017ef47e4afc3721bc246b29c501ebd2ae0ce364a72f8a0f184e5f\": not found" Jul 2 07:48:49.095786 kubelet[1679]: E0702 07:48:49.095717 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:49.426915 kubelet[1679]: I0702 07:48:49.426857 1679 topology_manager.go:215] "Topology Admit Handler" podUID="84ed0be2-fc67-4033-95f1-2c8bdf63ed3e" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-wwrrj" Jul 2 07:48:49.427115 kubelet[1679]: E0702 07:48:49.426930 1679 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="135bb32f-38a8-415e-ad1a-a0431dad4085" containerName="cilium-agent" Jul 2 07:48:49.427115 kubelet[1679]: E0702 07:48:49.426949 
1679 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="135bb32f-38a8-415e-ad1a-a0431dad4085" containerName="mount-cgroup" Jul 2 07:48:49.427115 kubelet[1679]: E0702 07:48:49.426960 1679 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="135bb32f-38a8-415e-ad1a-a0431dad4085" containerName="clean-cilium-state" Jul 2 07:48:49.427115 kubelet[1679]: E0702 07:48:49.426971 1679 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="135bb32f-38a8-415e-ad1a-a0431dad4085" containerName="apply-sysctl-overwrites" Jul 2 07:48:49.427115 kubelet[1679]: E0702 07:48:49.426982 1679 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="135bb32f-38a8-415e-ad1a-a0431dad4085" containerName="mount-bpf-fs" Jul 2 07:48:49.427115 kubelet[1679]: I0702 07:48:49.427010 1679 memory_manager.go:346] "RemoveStaleState removing state" podUID="135bb32f-38a8-415e-ad1a-a0431dad4085" containerName="cilium-agent" Jul 2 07:48:49.436352 kubelet[1679]: I0702 07:48:49.436287 1679 topology_manager.go:215] "Topology Admit Handler" podUID="27a2092f-ca1e-4770-b5e2-49ba7a5f532b" podNamespace="kube-system" podName="cilium-mp6dx" Jul 2 07:48:49.518297 kubelet[1679]: I0702 07:48:49.518243 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-xtables-lock\") pod \"cilium-mp6dx\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " pod="kube-system/cilium-mp6dx" Jul 2 07:48:49.518297 kubelet[1679]: I0702 07:48:49.518307 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-host-proc-sys-kernel\") pod \"cilium-mp6dx\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " pod="kube-system/cilium-mp6dx" Jul 2 07:48:49.518613 kubelet[1679]: I0702 07:48:49.518340 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-hostproc\") pod \"cilium-mp6dx\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " pod="kube-system/cilium-mp6dx" Jul 2 07:48:49.518613 kubelet[1679]: I0702 07:48:49.518370 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-host-proc-sys-net\") pod \"cilium-mp6dx\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " pod="kube-system/cilium-mp6dx" Jul 2 07:48:49.518613 kubelet[1679]: I0702 07:48:49.518401 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-hubble-tls\") pod \"cilium-mp6dx\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " pod="kube-system/cilium-mp6dx" Jul 2 07:48:49.518613 kubelet[1679]: I0702 07:48:49.518438 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-clustermesh-secrets\") pod \"cilium-mp6dx\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " pod="kube-system/cilium-mp6dx" Jul 2 07:48:49.518613 kubelet[1679]: I0702 07:48:49.518483 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-config-path\") pod \"cilium-mp6dx\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " pod="kube-system/cilium-mp6dx" Jul 2 07:48:49.518613 kubelet[1679]: I0702 07:48:49.518538 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-ipsec-secrets\") pod \"cilium-mp6dx\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " pod="kube-system/cilium-mp6dx" Jul 2 07:48:49.518949 kubelet[1679]: I0702 07:48:49.518606 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh2nj\" (UniqueName: \"kubernetes.io/projected/84ed0be2-fc67-4033-95f1-2c8bdf63ed3e-kube-api-access-hh2nj\") pod \"cilium-operator-6bc8ccdb58-wwrrj\" (UID: \"84ed0be2-fc67-4033-95f1-2c8bdf63ed3e\") " pod="kube-system/cilium-operator-6bc8ccdb58-wwrrj" Jul 2 07:48:49.518949 kubelet[1679]: I0702 07:48:49.518642 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cni-path\") pod \"cilium-mp6dx\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " pod="kube-system/cilium-mp6dx" Jul 2 07:48:49.518949 kubelet[1679]: I0702 07:48:49.518683 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-etc-cni-netd\") pod \"cilium-mp6dx\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " pod="kube-system/cilium-mp6dx" Jul 2 07:48:49.518949 kubelet[1679]: I0702 07:48:49.518719 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5ztf\" (UniqueName: \"kubernetes.io/projected/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-kube-api-access-l5ztf\") pod \"cilium-mp6dx\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " pod="kube-system/cilium-mp6dx" Jul 2 07:48:49.518949 kubelet[1679]: I0702 07:48:49.518751 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-bpf-maps\") pod \"cilium-mp6dx\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " pod="kube-system/cilium-mp6dx" Jul 2 07:48:49.519223 kubelet[1679]: I0702 07:48:49.518788 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-cgroup\") pod \"cilium-mp6dx\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " pod="kube-system/cilium-mp6dx" Jul 2 07:48:49.519223 kubelet[1679]: I0702 07:48:49.518822 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-lib-modules\") pod \"cilium-mp6dx\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " pod="kube-system/cilium-mp6dx" Jul 2 07:48:49.519223 kubelet[1679]: I0702 07:48:49.518871 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-run\") pod \"cilium-mp6dx\" (UID: 
\"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " pod="kube-system/cilium-mp6dx" Jul 2 07:48:49.519223 kubelet[1679]: I0702 07:48:49.518921 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84ed0be2-fc67-4033-95f1-2c8bdf63ed3e-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-wwrrj\" (UID: \"84ed0be2-fc67-4033-95f1-2c8bdf63ed3e\") " pod="kube-system/cilium-operator-6bc8ccdb58-wwrrj" Jul 2 07:48:49.731451 env[1340]: time="2024-07-02T07:48:49.731273622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-wwrrj,Uid:84ed0be2-fc67-4033-95f1-2c8bdf63ed3e,Namespace:kube-system,Attempt:0,}" Jul 2 07:48:49.740271 env[1340]: time="2024-07-02T07:48:49.740213758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mp6dx,Uid:27a2092f-ca1e-4770-b5e2-49ba7a5f532b,Namespace:kube-system,Attempt:0,}" Jul 2 07:48:49.757026 env[1340]: time="2024-07-02T07:48:49.755214675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:48:49.757026 env[1340]: time="2024-07-02T07:48:49.755263769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:48:49.757026 env[1340]: time="2024-07-02T07:48:49.755281903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:48:49.757026 env[1340]: time="2024-07-02T07:48:49.755535396Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c0c882deabecd01a777d6de62a67f378f4b4fd22dcdf53b3b00a759f2cbe93b pid=3189 runtime=io.containerd.runc.v2 Jul 2 07:48:49.765196 env[1340]: time="2024-07-02T07:48:49.765113930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:48:49.765378 env[1340]: time="2024-07-02T07:48:49.765211040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:48:49.765378 env[1340]: time="2024-07-02T07:48:49.765250375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:48:49.765544 env[1340]: time="2024-07-02T07:48:49.765452990Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e58bbf2f90386815f785b2b95666f2db4e7b47be8ca889d7799f5de003014f7a pid=3209 runtime=io.containerd.runc.v2 Jul 2 07:48:49.849021 env[1340]: time="2024-07-02T07:48:49.848967997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mp6dx,Uid:27a2092f-ca1e-4770-b5e2-49ba7a5f532b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e58bbf2f90386815f785b2b95666f2db4e7b47be8ca889d7799f5de003014f7a\"" Jul 2 07:48:49.852929 env[1340]: time="2024-07-02T07:48:49.852843497Z" level=info msg="CreateContainer within sandbox \"e58bbf2f90386815f785b2b95666f2db4e7b47be8ca889d7799f5de003014f7a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:48:49.869370 env[1340]: time="2024-07-02T07:48:49.869306418Z" level=info msg="CreateContainer within sandbox \"e58bbf2f90386815f785b2b95666f2db4e7b47be8ca889d7799f5de003014f7a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a8aa9f2a5d637718d195c51addf923c087cc1b0039675d6317888b5d74cce9f1\"" Jul 2 07:48:49.873883 env[1340]: time="2024-07-02T07:48:49.873827350Z" level=info msg="StartContainer for \"a8aa9f2a5d637718d195c51addf923c087cc1b0039675d6317888b5d74cce9f1\"" Jul 2 07:48:49.891042 env[1340]: time="2024-07-02T07:48:49.890983095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-wwrrj,Uid:84ed0be2-fc67-4033-95f1-2c8bdf63ed3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c0c882deabecd01a777d6de62a67f378f4b4fd22dcdf53b3b00a759f2cbe93b\"" Jul 2 07:48:49.896779 env[1340]: time="2024-07-02T07:48:49.896699065Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 07:48:49.954575 env[1340]: time="2024-07-02T07:48:49.953225489Z" level=info msg="StartContainer for \"a8aa9f2a5d637718d195c51addf923c087cc1b0039675d6317888b5d74cce9f1\" returns successfully" Jul 2 07:48:49.995309 env[1340]: time="2024-07-02T07:48:49.995142573Z" level=info msg="shim disconnected" id=a8aa9f2a5d637718d195c51addf923c087cc1b0039675d6317888b5d74cce9f1 Jul 2 07:48:49.995309 env[1340]: time="2024-07-02T07:48:49.995206327Z" level=warning msg="cleaning up after shim disconnected" id=a8aa9f2a5d637718d195c51addf923c087cc1b0039675d6317888b5d74cce9f1 namespace=k8s.io Jul 2 07:48:49.995309 env[1340]: time="2024-07-02T07:48:49.995221822Z" level=info msg="cleaning up dead shim" Jul 2 07:48:50.007586 env[1340]: time="2024-07-02T07:48:50.007531802Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3316 runtime=io.containerd.runc.v2\n" Jul 2 07:48:50.051688 kubelet[1679]: E0702 07:48:50.051583 1679 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:50.099141 kubelet[1679]: E0702 07:48:50.099104 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:50.100173 env[1340]: time="2024-07-02T07:48:50.100122581Z" level=info msg="StopPodSandbox for \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\"" Jul 2 07:48:50.100320 env[1340]: time="2024-07-02T07:48:50.100237860Z" level=info msg="TearDown network for sandbox 
\"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" successfully" Jul 2 07:48:50.100320 env[1340]: time="2024-07-02T07:48:50.100288863Z" level=info msg="StopPodSandbox for \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" returns successfully" Jul 2 07:48:50.100844 env[1340]: time="2024-07-02T07:48:50.100803244Z" level=info msg="RemovePodSandbox for \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\"" Jul 2 07:48:50.100983 env[1340]: time="2024-07-02T07:48:50.100848064Z" level=info msg="Forcibly stopping sandbox \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\"" Jul 2 07:48:50.100983 env[1340]: time="2024-07-02T07:48:50.100951159Z" level=info msg="TearDown network for sandbox \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" successfully" Jul 2 07:48:50.105681 env[1340]: time="2024-07-02T07:48:50.105630117Z" level=info msg="RemovePodSandbox \"12066faa53d039df86d1895006a19a020c37608b9c769c81ba54e5088427284e\" returns successfully" Jul 2 07:48:50.196299 kubelet[1679]: E0702 07:48:50.196263 1679 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:48:50.273240 kubelet[1679]: I0702 07:48:50.272418 1679 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="135bb32f-38a8-415e-ad1a-a0431dad4085" path="/var/lib/kubelet/pods/135bb32f-38a8-415e-ad1a-a0431dad4085/volumes" Jul 2 07:48:50.487263 env[1340]: time="2024-07-02T07:48:50.487193153Z" level=info msg="StopPodSandbox for \"e58bbf2f90386815f785b2b95666f2db4e7b47be8ca889d7799f5de003014f7a\"" Jul 2 07:48:50.487528 env[1340]: time="2024-07-02T07:48:50.487270295Z" level=info msg="Container to stop \"a8aa9f2a5d637718d195c51addf923c087cc1b0039675d6317888b5d74cce9f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:48:50.525446 env[1340]: time="2024-07-02T07:48:50.524959949Z" level=info msg="shim disconnected" id=e58bbf2f90386815f785b2b95666f2db4e7b47be8ca889d7799f5de003014f7a Jul 2 07:48:50.525834 env[1340]: time="2024-07-02T07:48:50.525791863Z" level=warning msg="cleaning up after shim disconnected" id=e58bbf2f90386815f785b2b95666f2db4e7b47be8ca889d7799f5de003014f7a namespace=k8s.io Jul 2 07:48:50.525978 env[1340]: time="2024-07-02T07:48:50.525955936Z" level=info msg="cleaning up dead shim" Jul 2 07:48:50.538618 env[1340]: time="2024-07-02T07:48:50.538566934Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3353 runtime=io.containerd.runc.v2\n" Jul 2 07:48:50.539039 env[1340]: time="2024-07-02T07:48:50.538988079Z" level=info msg="TearDown network for sandbox \"e58bbf2f90386815f785b2b95666f2db4e7b47be8ca889d7799f5de003014f7a\" successfully" Jul 2 07:48:50.539039 env[1340]: time="2024-07-02T07:48:50.539024253Z" level=info msg="StopPodSandbox for \"e58bbf2f90386815f785b2b95666f2db4e7b47be8ca889d7799f5de003014f7a\" returns successfully" Jul 2 07:48:50.625384 kubelet[1679]: I0702 07:48:50.625346 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-config-path\") pod \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " Jul 2 07:48:50.625685 kubelet[1679]: I0702 07:48:50.625408 1679 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-l5ztf\" (UniqueName: \"kubernetes.io/projected/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-kube-api-access-l5ztf\") pod \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " Jul 2 07:48:50.625685 kubelet[1679]: I0702 07:48:50.625444 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-host-proc-sys-kernel\") pod \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " Jul 2 07:48:50.625685 kubelet[1679]: I0702 07:48:50.625490 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-etc-cni-netd\") pod \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " Jul 2 07:48:50.625685 kubelet[1679]: I0702 07:48:50.625561 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-bpf-maps\") pod \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " Jul 2 07:48:50.625685 kubelet[1679]: I0702 07:48:50.625593 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cni-path\") pod \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " Jul 2 07:48:50.625685 kubelet[1679]: I0702 07:48:50.625622 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-run\") pod \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " Jul 2 07:48:50.626023 kubelet[1679]: I0702 07:48:50.625652 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-host-proc-sys-net\") pod \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " Jul 2 07:48:50.626023 kubelet[1679]: I0702 07:48:50.625687 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-clustermesh-secrets\") pod \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " Jul 2 07:48:50.626023 kubelet[1679]: I0702 07:48:50.625718 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-hostproc\") pod \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " Jul 2 07:48:50.626023 kubelet[1679]: I0702 07:48:50.625756 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-ipsec-secrets\") pod \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " Jul 2 07:48:50.626023 kubelet[1679]: I0702 07:48:50.625787 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-cgroup\") pod \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " Jul 2 07:48:50.626023 kubelet[1679]: I0702 07:48:50.625822 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-xtables-lock\") pod \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " Jul 2 07:48:50.626339 kubelet[1679]: I0702 07:48:50.625855 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-hubble-tls\") pod \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " Jul 2 07:48:50.626339 kubelet[1679]: I0702 07:48:50.625897 1679 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-lib-modules\") pod \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\" (UID: \"27a2092f-ca1e-4770-b5e2-49ba7a5f532b\") " Jul 2 07:48:50.626339 kubelet[1679]: I0702 07:48:50.625975 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "27a2092f-ca1e-4770-b5e2-49ba7a5f532b" (UID: "27a2092f-ca1e-4770-b5e2-49ba7a5f532b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:50.640709 kubelet[1679]: I0702 07:48:50.629137 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "27a2092f-ca1e-4770-b5e2-49ba7a5f532b" (UID: "27a2092f-ca1e-4770-b5e2-49ba7a5f532b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:48:50.640709 kubelet[1679]: I0702 07:48:50.634158 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-kube-api-access-l5ztf" (OuterVolumeSpecName: "kube-api-access-l5ztf") pod "27a2092f-ca1e-4770-b5e2-49ba7a5f532b" (UID: "27a2092f-ca1e-4770-b5e2-49ba7a5f532b"). InnerVolumeSpecName "kube-api-access-l5ztf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:48:50.640709 kubelet[1679]: I0702 07:48:50.634215 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "27a2092f-ca1e-4770-b5e2-49ba7a5f532b" (UID: "27a2092f-ca1e-4770-b5e2-49ba7a5f532b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:50.640709 kubelet[1679]: I0702 07:48:50.634245 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "27a2092f-ca1e-4770-b5e2-49ba7a5f532b" (UID: "27a2092f-ca1e-4770-b5e2-49ba7a5f532b"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:50.640709 kubelet[1679]: I0702 07:48:50.634275 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "27a2092f-ca1e-4770-b5e2-49ba7a5f532b" (UID: "27a2092f-ca1e-4770-b5e2-49ba7a5f532b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:50.641114 kubelet[1679]: I0702 07:48:50.634302 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cni-path" (OuterVolumeSpecName: "cni-path") pod "27a2092f-ca1e-4770-b5e2-49ba7a5f532b" (UID: "27a2092f-ca1e-4770-b5e2-49ba7a5f532b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:50.641114 kubelet[1679]: I0702 07:48:50.634332 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "27a2092f-ca1e-4770-b5e2-49ba7a5f532b" (UID: "27a2092f-ca1e-4770-b5e2-49ba7a5f532b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:50.641114 kubelet[1679]: I0702 07:48:50.634360 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "27a2092f-ca1e-4770-b5e2-49ba7a5f532b" (UID: "27a2092f-ca1e-4770-b5e2-49ba7a5f532b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:50.641114 kubelet[1679]: I0702 07:48:50.640106 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "27a2092f-ca1e-4770-b5e2-49ba7a5f532b" (UID: "27a2092f-ca1e-4770-b5e2-49ba7a5f532b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:50.641114 kubelet[1679]: I0702 07:48:50.640186 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-hostproc" (OuterVolumeSpecName: "hostproc") pod "27a2092f-ca1e-4770-b5e2-49ba7a5f532b" (UID: "27a2092f-ca1e-4770-b5e2-49ba7a5f532b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:50.641389 kubelet[1679]: I0702 07:48:50.640184 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "27a2092f-ca1e-4770-b5e2-49ba7a5f532b" (UID: "27a2092f-ca1e-4770-b5e2-49ba7a5f532b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:48:50.652096 systemd[1]: var-lib-kubelet-pods-27a2092f\x2dca1e\x2d4770\x2db5e2\x2d49ba7a5f532b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl5ztf.mount: Deactivated successfully. 
Jul 2 07:48:50.655362 kubelet[1679]: I0702 07:48:50.654598 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "27a2092f-ca1e-4770-b5e2-49ba7a5f532b" (UID: "27a2092f-ca1e-4770-b5e2-49ba7a5f532b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:48:50.655362 kubelet[1679]: I0702 07:48:50.654644 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "27a2092f-ca1e-4770-b5e2-49ba7a5f532b" (UID: "27a2092f-ca1e-4770-b5e2-49ba7a5f532b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:48:50.655834 systemd[1]: var-lib-kubelet-pods-27a2092f\x2dca1e\x2d4770\x2db5e2\x2d49ba7a5f532b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:48:50.670886 kubelet[1679]: I0702 07:48:50.667948 1679 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "27a2092f-ca1e-4770-b5e2-49ba7a5f532b" (UID: "27a2092f-ca1e-4770-b5e2-49ba7a5f532b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:48:50.665650 systemd[1]: var-lib-kubelet-pods-27a2092f\x2dca1e\x2d4770\x2db5e2\x2d49ba7a5f532b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 07:48:50.665934 systemd[1]: var-lib-kubelet-pods-27a2092f\x2dca1e\x2d4770\x2db5e2\x2d49ba7a5f532b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:48:50.711353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3780934030.mount: Deactivated successfully. 
Jul 2 07:48:50.726397 kubelet[1679]: I0702 07:48:50.726351 1679 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-host-proc-sys-net\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:50.726397 kubelet[1679]: I0702 07:48:50.726400 1679 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-clustermesh-secrets\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:50.726669 kubelet[1679]: I0702 07:48:50.726422 1679 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cni-path\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:50.726669 kubelet[1679]: I0702 07:48:50.726477 1679 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-run\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:50.726669 kubelet[1679]: I0702 07:48:50.726494 1679 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-hostproc\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:50.726669 kubelet[1679]: I0702 07:48:50.726545 1679 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-xtables-lock\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:50.726669 kubelet[1679]: I0702 07:48:50.726564 1679 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-ipsec-secrets\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:50.726669 kubelet[1679]: I0702 07:48:50.726579 1679 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-cgroup\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:50.726669 kubelet[1679]: I0702 07:48:50.726601 1679 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-hubble-tls\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:50.726669 kubelet[1679]: I0702 07:48:50.726619 1679 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-lib-modules\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:50.727014 kubelet[1679]: I0702 07:48:50.726637 1679 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-cilium-config-path\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:50.727014 kubelet[1679]: I0702 07:48:50.726658 1679 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l5ztf\" (UniqueName: \"kubernetes.io/projected/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-kube-api-access-l5ztf\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:50.727014 kubelet[1679]: I0702 07:48:50.726675 1679 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-host-proc-sys-kernel\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:50.727014 
kubelet[1679]: I0702 07:48:50.726691 1679 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-etc-cni-netd\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:50.727014 kubelet[1679]: I0702 07:48:50.726709 1679 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/27a2092f-ca1e-4770-b5e2-49ba7a5f532b-bpf-maps\") on node \"10.128.0.9\" DevicePath \"\"" Jul 2 07:48:51.099692 kubelet[1679]: E0702 07:48:51.099588 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:51.495583 kubelet[1679]: I0702 07:48:51.495552 1679 scope.go:117] "RemoveContainer" containerID="a8aa9f2a5d637718d195c51addf923c087cc1b0039675d6317888b5d74cce9f1" Jul 2 07:48:51.500940 env[1340]: time="2024-07-02T07:48:51.500892539Z" level=info msg="RemoveContainer for \"a8aa9f2a5d637718d195c51addf923c087cc1b0039675d6317888b5d74cce9f1\"" Jul 2 07:48:51.511023 env[1340]: time="2024-07-02T07:48:51.510972009Z" level=info msg="RemoveContainer for \"a8aa9f2a5d637718d195c51addf923c087cc1b0039675d6317888b5d74cce9f1\" returns successfully" Jul 2 07:48:51.533123 kubelet[1679]: I0702 07:48:51.533079 1679 topology_manager.go:215] "Topology Admit Handler" podUID="9f5ef33d-8fb0-44db-9a98-97437eecba09" podNamespace="kube-system" podName="cilium-t8zgj" Jul 2 07:48:51.533442 kubelet[1679]: E0702 07:48:51.533414 1679 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="27a2092f-ca1e-4770-b5e2-49ba7a5f532b" containerName="mount-cgroup" Jul 2 07:48:51.533628 kubelet[1679]: I0702 07:48:51.533612 1679 memory_manager.go:346] "RemoveStaleState removing state" podUID="27a2092f-ca1e-4770-b5e2-49ba7a5f532b" containerName="mount-cgroup" Jul 2 07:48:51.581258 env[1340]: time="2024-07-02T07:48:51.581186078Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:51.584199 env[1340]: time="2024-07-02T07:48:51.584139395Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:51.586653 env[1340]: time="2024-07-02T07:48:51.586606375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:48:51.587372 env[1340]: time="2024-07-02T07:48:51.587319481Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 07:48:51.590803 env[1340]: time="2024-07-02T07:48:51.590721785Z" level=info msg="CreateContainer within sandbox \"3c0c882deabecd01a777d6de62a67f378f4b4fd22dcdf53b3b00a759f2cbe93b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 07:48:51.615276 env[1340]: time="2024-07-02T07:48:51.615186775Z" level=info msg="CreateContainer within sandbox \"3c0c882deabecd01a777d6de62a67f378f4b4fd22dcdf53b3b00a759f2cbe93b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns 
container id \"226362f67ccee32262d97c10d4805b5915c7cc23f475fe6327ec536a10212df2\"" Jul 2 07:48:51.616234 env[1340]: time="2024-07-02T07:48:51.616182063Z" level=info msg="StartContainer for \"226362f67ccee32262d97c10d4805b5915c7cc23f475fe6327ec536a10212df2\"" Jul 2 07:48:51.638611 kubelet[1679]: I0702 07:48:51.638076 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f5ef33d-8fb0-44db-9a98-97437eecba09-xtables-lock\") pod \"cilium-t8zgj\" (UID: \"9f5ef33d-8fb0-44db-9a98-97437eecba09\") " pod="kube-system/cilium-t8zgj" Jul 2 07:48:51.638611 kubelet[1679]: I0702 07:48:51.638307 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f5ef33d-8fb0-44db-9a98-97437eecba09-cilium-config-path\") pod \"cilium-t8zgj\" (UID: \"9f5ef33d-8fb0-44db-9a98-97437eecba09\") " pod="kube-system/cilium-t8zgj" Jul 2 07:48:51.638611 kubelet[1679]: I0702 07:48:51.638374 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f5ef33d-8fb0-44db-9a98-97437eecba09-lib-modules\") pod \"cilium-t8zgj\" (UID: \"9f5ef33d-8fb0-44db-9a98-97437eecba09\") " pod="kube-system/cilium-t8zgj" Jul 2 07:48:51.638611 kubelet[1679]: I0702 07:48:51.638413 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f5ef33d-8fb0-44db-9a98-97437eecba09-cilium-cgroup\") pod \"cilium-t8zgj\" (UID: \"9f5ef33d-8fb0-44db-9a98-97437eecba09\") " pod="kube-system/cilium-t8zgj" Jul 2 07:48:51.638611 kubelet[1679]: I0702 07:48:51.638468 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f5ef33d-8fb0-44db-9a98-97437eecba09-cilium-run\") pod \"cilium-t8zgj\" (UID: \"9f5ef33d-8fb0-44db-9a98-97437eecba09\") " pod="kube-system/cilium-t8zgj" Jul 2 07:48:51.638611 kubelet[1679]: I0702 07:48:51.638543 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f5ef33d-8fb0-44db-9a98-97437eecba09-hostproc\") pod \"cilium-t8zgj\" (UID: \"9f5ef33d-8fb0-44db-9a98-97437eecba09\") " pod="kube-system/cilium-t8zgj" Jul 2 07:48:51.639097 kubelet[1679]: I0702 07:48:51.638587 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f5ef33d-8fb0-44db-9a98-97437eecba09-cni-path\") pod \"cilium-t8zgj\" (UID: \"9f5ef33d-8fb0-44db-9a98-97437eecba09\") " pod="kube-system/cilium-t8zgj" Jul 2 07:48:51.639097 kubelet[1679]: I0702 07:48:51.638746 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9f5ef33d-8fb0-44db-9a98-97437eecba09-cilium-ipsec-secrets\") pod \"cilium-t8zgj\" (UID: \"9f5ef33d-8fb0-44db-9a98-97437eecba09\") " pod="kube-system/cilium-t8zgj" Jul 2 07:48:51.639097 kubelet[1679]: I0702 07:48:51.638806 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hbkv\" (UniqueName: \"kubernetes.io/projected/9f5ef33d-8fb0-44db-9a98-97437eecba09-kube-api-access-5hbkv\") pod \"cilium-t8zgj\" (UID: 
\"9f5ef33d-8fb0-44db-9a98-97437eecba09\") " pod="kube-system/cilium-t8zgj" Jul 2 07:48:51.639097 kubelet[1679]: I0702 07:48:51.638843 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f5ef33d-8fb0-44db-9a98-97437eecba09-etc-cni-netd\") pod \"cilium-t8zgj\" (UID: \"9f5ef33d-8fb0-44db-9a98-97437eecba09\") " pod="kube-system/cilium-t8zgj" Jul 2 07:48:51.639097 kubelet[1679]: I0702 07:48:51.638898 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f5ef33d-8fb0-44db-9a98-97437eecba09-clustermesh-secrets\") pod \"cilium-t8zgj\" (UID: \"9f5ef33d-8fb0-44db-9a98-97437eecba09\") " pod="kube-system/cilium-t8zgj" Jul 2 07:48:51.639375 kubelet[1679]: I0702 07:48:51.638953 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f5ef33d-8fb0-44db-9a98-97437eecba09-host-proc-sys-kernel\") pod \"cilium-t8zgj\" (UID: \"9f5ef33d-8fb0-44db-9a98-97437eecba09\") " pod="kube-system/cilium-t8zgj" Jul 2 07:48:51.639375 kubelet[1679]: I0702 07:48:51.638991 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f5ef33d-8fb0-44db-9a98-97437eecba09-bpf-maps\") pod \"cilium-t8zgj\" (UID: \"9f5ef33d-8fb0-44db-9a98-97437eecba09\") " pod="kube-system/cilium-t8zgj" Jul 2 07:48:51.639375 kubelet[1679]: I0702 07:48:51.639044 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f5ef33d-8fb0-44db-9a98-97437eecba09-hubble-tls\") pod \"cilium-t8zgj\" (UID: \"9f5ef33d-8fb0-44db-9a98-97437eecba09\") " pod="kube-system/cilium-t8zgj" Jul 2 07:48:51.639375 kubelet[1679]: I0702 07:48:51.639083 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f5ef33d-8fb0-44db-9a98-97437eecba09-host-proc-sys-net\") pod \"cilium-t8zgj\" (UID: \"9f5ef33d-8fb0-44db-9a98-97437eecba09\") " pod="kube-system/cilium-t8zgj" Jul 2 07:48:51.654709 systemd[1]: run-containerd-runc-k8s.io-226362f67ccee32262d97c10d4805b5915c7cc23f475fe6327ec536a10212df2-runc.FhUFGI.mount: Deactivated successfully. Jul 2 07:48:51.702599 env[1340]: time="2024-07-02T07:48:51.700772961Z" level=info msg="StartContainer for \"226362f67ccee32262d97c10d4805b5915c7cc23f475fe6327ec536a10212df2\" returns successfully" Jul 2 07:48:51.840288 env[1340]: time="2024-07-02T07:48:51.840134881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t8zgj,Uid:9f5ef33d-8fb0-44db-9a98-97437eecba09,Namespace:kube-system,Attempt:0,}" Jul 2 07:48:51.871936 env[1340]: time="2024-07-02T07:48:51.871847475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:48:51.872139 env[1340]: time="2024-07-02T07:48:51.871961862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:48:51.872139 env[1340]: time="2024-07-02T07:48:51.872001141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:48:51.876540 env[1340]: time="2024-07-02T07:48:51.872308943Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ad9e840e4ac001e680b63bf9a7c13a11da5f92a8b574bcf01cb51465dc389d9 pid=3418 runtime=io.containerd.runc.v2 Jul 2 07:48:51.945687 env[1340]: time="2024-07-02T07:48:51.945615242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t8zgj,Uid:9f5ef33d-8fb0-44db-9a98-97437eecba09,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ad9e840e4ac001e680b63bf9a7c13a11da5f92a8b574bcf01cb51465dc389d9\"" Jul 2 07:48:51.949192 env[1340]: time="2024-07-02T07:48:51.949139073Z" level=info msg="CreateContainer within sandbox \"5ad9e840e4ac001e680b63bf9a7c13a11da5f92a8b574bcf01cb51465dc389d9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:48:51.964730 env[1340]: time="2024-07-02T07:48:51.964635976Z" level=info msg="CreateContainer within sandbox \"5ad9e840e4ac001e680b63bf9a7c13a11da5f92a8b574bcf01cb51465dc389d9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"16fb86844a6858d04fa8fb73f1d9b18b2e3c6dd349c32c748e15c5a3901ed6e0\"" Jul 2 07:48:51.965441 env[1340]: time="2024-07-02T07:48:51.965401463Z" level=info msg="StartContainer for \"16fb86844a6858d04fa8fb73f1d9b18b2e3c6dd349c32c748e15c5a3901ed6e0\"" Jul 2 07:48:52.028390 env[1340]: time="2024-07-02T07:48:52.028330036Z" level=info msg="StartContainer for \"16fb86844a6858d04fa8fb73f1d9b18b2e3c6dd349c32c748e15c5a3901ed6e0\" returns successfully" Jul 2 07:48:52.100729 kubelet[1679]: E0702 07:48:52.100572 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:52.237134 env[1340]: time="2024-07-02T07:48:52.237066621Z" level=info msg="shim disconnected" id=16fb86844a6858d04fa8fb73f1d9b18b2e3c6dd349c32c748e15c5a3901ed6e0 Jul 2 07:48:52.237134 env[1340]: time="2024-07-02T07:48:52.237138540Z" level=warning msg="cleaning up after shim disconnected" id=16fb86844a6858d04fa8fb73f1d9b18b2e3c6dd349c32c748e15c5a3901ed6e0 namespace=k8s.io Jul 2 07:48:52.237134 env[1340]: time="2024-07-02T07:48:52.237153582Z" level=info msg="cleaning up dead shim" Jul 2 07:48:52.249061 env[1340]: time="2024-07-02T07:48:52.248991360Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3501 runtime=io.containerd.runc.v2\n" Jul 2 07:48:52.272973 kubelet[1679]: I0702 07:48:52.272934 1679 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="27a2092f-ca1e-4770-b5e2-49ba7a5f532b" path="/var/lib/kubelet/pods/27a2092f-ca1e-4770-b5e2-49ba7a5f532b/volumes" Jul 2 07:48:52.513032 kubelet[1679]: I0702 07:48:52.511339 1679 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-wwrrj" podStartSLOduration=1.8187666710000001 podCreationTimestamp="2024-07-02 07:48:49 +0000 UTC" firstStartedPulling="2024-07-02 07:48:49.895292908 +0000 UTC m=+60.302886597" lastFinishedPulling="2024-07-02 07:48:51.58779393 +0000 UTC m=+61.995387619" observedRunningTime="2024-07-02 07:48:52.510638398 +0000 UTC m=+62.918232094" watchObservedRunningTime="2024-07-02 07:48:52.511267693 +0000 UTC m=+62.918861391" Jul 2 07:48:52.520302 env[1340]: time="2024-07-02T07:48:52.520224735Z" level=info msg="CreateContainer within sandbox \"5ad9e840e4ac001e680b63bf9a7c13a11da5f92a8b574bcf01cb51465dc389d9\" for 
container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:48:52.535533 env[1340]: time="2024-07-02T07:48:52.535444212Z" level=info msg="CreateContainer within sandbox \"5ad9e840e4ac001e680b63bf9a7c13a11da5f92a8b574bcf01cb51465dc389d9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6bc01297473a210bd8b8a5493419af389ea2c60e9f3dbea3cdf856ba830af8be\"" Jul 2 07:48:52.536384 env[1340]: time="2024-07-02T07:48:52.536342706Z" level=info msg="StartContainer for \"6bc01297473a210bd8b8a5493419af389ea2c60e9f3dbea3cdf856ba830af8be\"" Jul 2 07:48:52.606720 env[1340]: time="2024-07-02T07:48:52.606661136Z" level=info msg="StartContainer for \"6bc01297473a210bd8b8a5493419af389ea2c60e9f3dbea3cdf856ba830af8be\" returns successfully" Jul 2 07:48:52.653237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bc01297473a210bd8b8a5493419af389ea2c60e9f3dbea3cdf856ba830af8be-rootfs.mount: Deactivated successfully. Jul 2 07:48:52.656737 env[1340]: time="2024-07-02T07:48:52.656683126Z" level=info msg="shim disconnected" id=6bc01297473a210bd8b8a5493419af389ea2c60e9f3dbea3cdf856ba830af8be Jul 2 07:48:52.657047 env[1340]: time="2024-07-02T07:48:52.657017835Z" level=warning msg="cleaning up after shim disconnected" id=6bc01297473a210bd8b8a5493419af389ea2c60e9f3dbea3cdf856ba830af8be namespace=k8s.io Jul 2 07:48:52.657173 env[1340]: time="2024-07-02T07:48:52.657150496Z" level=info msg="cleaning up dead shim" Jul 2 07:48:52.669104 env[1340]: time="2024-07-02T07:48:52.669044833Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3564 runtime=io.containerd.runc.v2\n" Jul 2 07:48:52.932173 kubelet[1679]: I0702 07:48:52.932117 1679 setters.go:552] "Node became not ready" node="10.128.0.9" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T07:48:52Z","lastTransitionTime":"2024-07-02T07:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 07:48:53.101089 kubelet[1679]: E0702 07:48:53.101039 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:53.527099 env[1340]: time="2024-07-02T07:48:53.527034380Z" level=info msg="CreateContainer within sandbox \"5ad9e840e4ac001e680b63bf9a7c13a11da5f92a8b574bcf01cb51465dc389d9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:48:53.558888 env[1340]: time="2024-07-02T07:48:53.558818377Z" level=info msg="CreateContainer within sandbox \"5ad9e840e4ac001e680b63bf9a7c13a11da5f92a8b574bcf01cb51465dc389d9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bff6da544f77caba53afbba434269ba469130941766d9333c28b672b0db16f0c\"" Jul 2 07:48:53.559819 env[1340]: time="2024-07-02T07:48:53.559778361Z" level=info msg="StartContainer for \"bff6da544f77caba53afbba434269ba469130941766d9333c28b672b0db16f0c\"" Jul 2 07:48:53.640865 env[1340]: time="2024-07-02T07:48:53.640811642Z" level=info msg="StartContainer for \"bff6da544f77caba53afbba434269ba469130941766d9333c28b672b0db16f0c\" returns successfully" Jul 2 07:48:53.674162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bff6da544f77caba53afbba434269ba469130941766d9333c28b672b0db16f0c-rootfs.mount: Deactivated successfully. 
Jul 2 07:48:53.675744 env[1340]: time="2024-07-02T07:48:53.675688697Z" level=info msg="shim disconnected" id=bff6da544f77caba53afbba434269ba469130941766d9333c28b672b0db16f0c Jul 2 07:48:53.676101 env[1340]: time="2024-07-02T07:48:53.676066212Z" level=warning msg="cleaning up after shim disconnected" id=bff6da544f77caba53afbba434269ba469130941766d9333c28b672b0db16f0c namespace=k8s.io Jul 2 07:48:53.676272 env[1340]: time="2024-07-02T07:48:53.676248797Z" level=info msg="cleaning up dead shim" Jul 2 07:48:53.688966 env[1340]: time="2024-07-02T07:48:53.688901452Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3623 runtime=io.containerd.runc.v2\n" Jul 2 07:48:54.101701 kubelet[1679]: E0702 07:48:54.101624 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:54.532319 env[1340]: time="2024-07-02T07:48:54.532219041Z" level=info msg="CreateContainer within sandbox \"5ad9e840e4ac001e680b63bf9a7c13a11da5f92a8b574bcf01cb51465dc389d9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:48:54.549615 env[1340]: time="2024-07-02T07:48:54.549481851Z" level=info msg="CreateContainer within sandbox \"5ad9e840e4ac001e680b63bf9a7c13a11da5f92a8b574bcf01cb51465dc389d9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dd4d45354589861f7ead93d7def035970bb8415a50646d850839da68544ed349\"" Jul 2 07:48:54.556306 env[1340]: time="2024-07-02T07:48:54.556221615Z" level=info msg="StartContainer for \"dd4d45354589861f7ead93d7def035970bb8415a50646d850839da68544ed349\"" Jul 2 07:48:54.639479 env[1340]: time="2024-07-02T07:48:54.639413762Z" level=info msg="StartContainer for \"dd4d45354589861f7ead93d7def035970bb8415a50646d850839da68544ed349\" returns successfully" Jul 2 07:48:54.663305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd4d45354589861f7ead93d7def035970bb8415a50646d850839da68544ed349-rootfs.mount: Deactivated successfully. 
Jul 2 07:48:54.667034 env[1340]: time="2024-07-02T07:48:54.666979767Z" level=info msg="shim disconnected" id=dd4d45354589861f7ead93d7def035970bb8415a50646d850839da68544ed349 Jul 2 07:48:54.667347 env[1340]: time="2024-07-02T07:48:54.667319602Z" level=warning msg="cleaning up after shim disconnected" id=dd4d45354589861f7ead93d7def035970bb8415a50646d850839da68544ed349 namespace=k8s.io Jul 2 07:48:54.667480 env[1340]: time="2024-07-02T07:48:54.667458030Z" level=info msg="cleaning up dead shim" Jul 2 07:48:54.680099 env[1340]: time="2024-07-02T07:48:54.680042457Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:48:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3680 runtime=io.containerd.runc.v2\n" Jul 2 07:48:55.102566 kubelet[1679]: E0702 07:48:55.102494 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:55.198041 kubelet[1679]: E0702 07:48:55.197992 1679 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:48:55.538217 env[1340]: time="2024-07-02T07:48:55.538088291Z" level=info msg="CreateContainer within sandbox \"5ad9e840e4ac001e680b63bf9a7c13a11da5f92a8b574bcf01cb51465dc389d9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:48:55.563455 env[1340]: time="2024-07-02T07:48:55.563399187Z" level=info msg="CreateContainer within sandbox \"5ad9e840e4ac001e680b63bf9a7c13a11da5f92a8b574bcf01cb51465dc389d9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3ae0e5ef75b8a6f3a72578528f1931c6995500d50aa0a59438b8f665d90d6789\"" Jul 2 07:48:55.564309 env[1340]: time="2024-07-02T07:48:55.564265652Z" level=info msg="StartContainer for \"3ae0e5ef75b8a6f3a72578528f1931c6995500d50aa0a59438b8f665d90d6789\"" Jul 2 07:48:55.662659 env[1340]: time="2024-07-02T07:48:55.661783618Z" level=info msg="StartContainer for \"3ae0e5ef75b8a6f3a72578528f1931c6995500d50aa0a59438b8f665d90d6789\" returns successfully" Jul 2 07:48:55.704429 systemd[1]: run-containerd-runc-k8s.io-3ae0e5ef75b8a6f3a72578528f1931c6995500d50aa0a59438b8f665d90d6789-runc.xnuFKJ.mount: Deactivated successfully. Jul 2 07:48:56.097540 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 07:48:56.103152 kubelet[1679]: E0702 07:48:56.103116 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:56.557524 kubelet[1679]: I0702 07:48:56.557456 1679 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-t8zgj" podStartSLOduration=5.557408387 podCreationTimestamp="2024-07-02 07:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:48:56.557051505 +0000 UTC m=+66.964645200" watchObservedRunningTime="2024-07-02 07:48:56.557408387 +0000 UTC m=+66.965002084" Jul 2 07:48:56.917764 systemd[1]: run-containerd-runc-k8s.io-3ae0e5ef75b8a6f3a72578528f1931c6995500d50aa0a59438b8f665d90d6789-runc.EfyTqV.mount: Deactivated successfully. 
Jul 2 07:48:57.104416 kubelet[1679]: E0702 07:48:57.104359 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:58.104683 kubelet[1679]: E0702 07:48:58.104637 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:58.987793 systemd-networkd[1087]: lxc_health: Link UP Jul 2 07:48:59.002305 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:48:59.002971 systemd-networkd[1087]: lxc_health: Gained carrier Jul 2 07:48:59.106203 kubelet[1679]: E0702 07:48:59.106136 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:48:59.132633 systemd[1]: run-containerd-runc-k8s.io-3ae0e5ef75b8a6f3a72578528f1931c6995500d50aa0a59438b8f665d90d6789-runc.Kov21o.mount: Deactivated successfully. Jul 2 07:49:00.107350 kubelet[1679]: E0702 07:49:00.107304 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:00.223826 systemd-networkd[1087]: lxc_health: Gained IPv6LL Jul 2 07:49:01.108427 kubelet[1679]: E0702 07:49:01.108377 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:02.110119 kubelet[1679]: E0702 07:49:02.110063 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:03.111322 kubelet[1679]: E0702 07:49:03.111265 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:03.750568 systemd[1]: run-containerd-runc-k8s.io-3ae0e5ef75b8a6f3a72578528f1931c6995500d50aa0a59438b8f665d90d6789-runc.JiEswJ.mount: Deactivated successfully. Jul 2 07:49:04.112956 kubelet[1679]: E0702 07:49:04.112763 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:05.113548 kubelet[1679]: E0702 07:49:05.113473 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:06.114311 kubelet[1679]: E0702 07:49:06.114272 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:07.115200 kubelet[1679]: E0702 07:49:07.115133 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:08.115973 kubelet[1679]: E0702 07:49:08.115911 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:09.116757 kubelet[1679]: E0702 07:49:09.116679 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:10.052011 kubelet[1679]: E0702 07:49:10.051948 1679 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:49:10.117848 kubelet[1679]: E0702 07:49:10.117786 1679 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"