Jul 2 07:56:53.132288 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024 Jul 2 07:56:53.132334 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:56:53.132351 kernel: BIOS-provided physical RAM map: Jul 2 07:56:53.132363 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jul 2 07:56:53.132375 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jul 2 07:56:53.132387 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jul 2 07:56:53.132405 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jul 2 07:56:53.132418 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jul 2 07:56:53.132431 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jul 2 07:56:53.132443 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jul 2 07:56:53.132455 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jul 2 07:56:53.132468 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jul 2 07:56:53.132481 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jul 2 07:56:53.132496 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jul 2 07:56:53.132518 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jul 2 07:56:53.132534 kernel: NX (Execute Disable) protection: active Jul 2 07:56:53.132549 kernel: efi: EFI v2.70 by EDK II Jul 2 07:56:53.132565 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd2d2018 Jul 2 07:56:53.132580 kernel: random: crng init done Jul 2 07:56:53.132594 kernel: SMBIOS 2.4 present. 
Jul 2 07:56:53.132608 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024 Jul 2 07:56:53.132632 kernel: Hypervisor detected: KVM Jul 2 07:56:53.132670 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 07:56:53.132686 kernel: kvm-clock: cpu 0, msr 201192001, primary cpu clock Jul 2 07:56:53.132701 kernel: kvm-clock: using sched offset of 13245291603 cycles Jul 2 07:56:53.132717 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 07:56:53.132733 kernel: tsc: Detected 2299.998 MHz processor Jul 2 07:56:53.132748 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 07:56:53.132763 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 07:56:53.132778 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jul 2 07:56:53.132794 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 07:56:53.132808 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jul 2 07:56:53.132829 kernel: Using GB pages for direct mapping Jul 2 07:56:53.132843 kernel: Secure boot disabled Jul 2 07:56:53.132858 kernel: ACPI: Early table checksum verification disabled Jul 2 07:56:53.132872 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jul 2 07:56:53.132887 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jul 2 07:56:53.132902 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jul 2 07:56:53.132917 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jul 2 07:56:53.132932 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jul 2 07:56:53.132959 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217) Jul 2 07:56:53.132974 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jul 2 07:56:53.132991 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jul 2 07:56:53.133007 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jul 2 07:56:53.133023 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jul 2 07:56:53.133039 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jul 2 07:56:53.133060 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jul 2 07:56:53.133076 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jul 2 07:56:53.133092 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jul 2 07:56:53.133108 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jul 2 07:56:53.133124 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jul 2 07:56:53.133140 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jul 2 07:56:53.133156 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jul 2 07:56:53.133171 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jul 2 07:56:53.133187 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jul 2 07:56:53.133208 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 2 07:56:53.133223 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 2 07:56:53.133239 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 2 07:56:53.133256 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jul 2 07:56:53.133272 kernel: ACPI: SRAT: Node 0 PXM 0 
[mem 0x100000000-0x21fffffff] Jul 2 07:56:53.133289 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jul 2 07:56:53.133306 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jul 2 07:56:53.133322 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jul 2 07:56:53.133338 kernel: Zone ranges: Jul 2 07:56:53.133359 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 07:56:53.133375 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 2 07:56:53.133390 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jul 2 07:56:53.133407 kernel: Movable zone start for each node Jul 2 07:56:53.133423 kernel: Early memory node ranges Jul 2 07:56:53.133439 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jul 2 07:56:53.133455 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jul 2 07:56:53.133471 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jul 2 07:56:53.133487 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jul 2 07:56:53.133508 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jul 2 07:56:53.133524 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jul 2 07:56:53.133540 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:56:53.133556 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jul 2 07:56:53.133572 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jul 2 07:56:53.133588 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 2 07:56:53.133605 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jul 2 07:56:53.133629 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 2 07:56:53.133645 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 07:56:53.133679 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 07:56:53.133695 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 07:56:53.133711 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 07:56:53.133728 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 07:56:53.133744 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 07:56:53.133759 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 07:56:53.133776 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 2 07:56:53.133792 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jul 2 07:56:53.133809 kernel: Booting paravirtualized kernel on KVM Jul 2 07:56:53.133830 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 07:56:53.133846 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Jul 2 07:56:53.133862 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Jul 2 07:56:53.133878 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Jul 2 07:56:53.133894 kernel: pcpu-alloc: [0] 0 1 Jul 2 07:56:53.133910 kernel: kvm-guest: PV spinlocks enabled Jul 2 07:56:53.133926 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 07:56:53.133943 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1932280 Jul 2 07:56:53.133959 kernel: Policy zone: Normal Jul 2 07:56:53.133983 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:56:53.133999 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 07:56:53.134015 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 2 07:56:53.134031 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 07:56:53.134047 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 07:56:53.134064 kernel: Memory: 7516812K/7860584K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 343512K reserved, 0K cma-reserved) Jul 2 07:56:53.134081 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 07:56:53.134097 kernel: Kernel/User page tables isolation: enabled Jul 2 07:56:53.134117 kernel: ftrace: allocating 34514 entries in 135 pages Jul 2 07:56:53.134133 kernel: ftrace: allocated 135 pages with 4 groups Jul 2 07:56:53.134149 kernel: rcu: Hierarchical RCU implementation. Jul 2 07:56:53.134166 kernel: rcu: RCU event tracing is enabled. Jul 2 07:56:53.134183 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 07:56:53.134199 kernel: Rude variant of Tasks RCU enabled. Jul 2 07:56:53.134215 kernel: Tracing variant of Tasks RCU enabled. Jul 2 07:56:53.134232 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 07:56:53.134246 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 07:56:53.134265 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 2 07:56:53.134294 kernel: Console: colour dummy device 80x25 Jul 2 07:56:53.134309 kernel: printk: console [ttyS0] enabled Jul 2 07:56:53.134329 kernel: ACPI: Core revision 20210730 Jul 2 07:56:53.134345 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 07:56:53.134362 kernel: x2apic enabled Jul 2 07:56:53.134378 kernel: Switched APIC routing to physical x2apic. Jul 2 07:56:53.134395 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jul 2 07:56:53.134411 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 2 07:56:53.134429 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jul 2 07:56:53.134449 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jul 2 07:56:53.134466 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jul 2 07:56:53.134484 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 07:56:53.134502 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 2 07:56:53.134520 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 2 07:56:53.134537 kernel: Spectre V2 : Mitigation: IBRS Jul 2 07:56:53.134559 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 07:56:53.134577 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 07:56:53.134595 kernel: RETBleed: Mitigation: IBRS Jul 2 07:56:53.134623 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 07:56:53.134641 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Jul 2 07:56:53.148142 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 2 07:56:53.148176 kernel: MDS: Mitigation: Clear CPU buffers Jul 2 07:56:53.148196 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 07:56:53.148214 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 07:56:53.148241 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 07:56:53.148259 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 07:56:53.148277 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 07:56:53.148296 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 2 07:56:53.148314 kernel: Freeing SMP alternatives memory: 32K Jul 2 07:56:53.148332 kernel: pid_max: default: 32768 minimum: 301 Jul 2 07:56:53.148349 kernel: LSM: Security Framework initializing Jul 2 07:56:53.148367 kernel: SELinux: Initializing. Jul 2 07:56:53.148385 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 07:56:53.148406 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 07:56:53.148424 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jul 2 07:56:53.148449 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jul 2 07:56:53.148470 kernel: signal: max sigframe size: 1776 Jul 2 07:56:53.148489 kernel: rcu: Hierarchical SRCU implementation. Jul 2 07:56:53.148507 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 2 07:56:53.148525 kernel: smp: Bringing up secondary CPUs ... Jul 2 07:56:53.148543 kernel: x86: Booting SMP configuration: Jul 2 07:56:53.148561 kernel: .... node #0, CPUs: #1 Jul 2 07:56:53.148583 kernel: kvm-clock: cpu 1, msr 201192041, secondary cpu clock Jul 2 07:56:53.148602 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 2 07:56:53.148631 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jul 2 07:56:53.148649 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 07:56:53.148844 kernel: smpboot: Max logical packages: 1 Jul 2 07:56:53.148860 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jul 2 07:56:53.148877 kernel: devtmpfs: initialized Jul 2 07:56:53.148893 kernel: x86/mm: Memory block size: 128MB Jul 2 07:56:53.148909 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jul 2 07:56:53.148931 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 07:56:53.148946 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 07:56:53.148962 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 07:56:53.148978 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 07:56:53.148995 kernel: audit: initializing netlink subsys (disabled) Jul 2 07:56:53.149014 kernel: audit: type=2000 audit(1719907012.251:1): state=initialized audit_enabled=0 res=1 Jul 2 07:56:53.149031 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 07:56:53.149049 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 07:56:53.149067 kernel: cpuidle: using governor menu Jul 2 07:56:53.149089 kernel: ACPI: bus type PCI registered Jul 2 07:56:53.149106 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 07:56:53.149122 kernel: dca service started, version 1.12.1 Jul 2 07:56:53.149138 kernel: PCI: Using configuration type 1 for base access Jul 2 07:56:53.149155 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 2 07:56:53.149171 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 07:56:53.149187 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 07:56:53.149203 kernel: ACPI: Added _OSI(Module Device) Jul 2 07:56:53.149220 kernel: ACPI: Added _OSI(Processor Device) Jul 2 07:56:53.149241 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 07:56:53.149259 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 07:56:53.149277 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 2 07:56:53.149294 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 2 07:56:53.149312 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 2 07:56:53.149329 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 2 07:56:53.149345 kernel: ACPI: Interpreter enabled Jul 2 07:56:53.149361 kernel: ACPI: PM: (supports S0 S3 S5) Jul 2 07:56:53.149377 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 07:56:53.149397 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 07:56:53.149413 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jul 2 07:56:53.149430 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 07:56:53.149737 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 2 07:56:53.149922 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Jul 2 07:56:53.149945 kernel: PCI host bridge to bus 0000:00 Jul 2 07:56:53.150115 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 07:56:53.150274 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 07:56:53.150419 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 07:56:53.150562 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jul 2 07:56:53.150731 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 07:56:53.150919 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 07:56:53.151093 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jul 2 07:56:53.151292 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 07:56:53.151480 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 2 07:56:53.158903 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jul 2 07:56:53.159139 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jul 2 07:56:53.159313 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jul 2 07:56:53.159506 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 2 07:56:53.159702 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jul 2 07:56:53.159895 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jul 2 07:56:53.160075 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 07:56:53.160246 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jul 2 07:56:53.160411 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jul 2 07:56:53.160434 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 07:56:53.160453 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 07:56:53.160472 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 07:56:53.160495 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 07:56:53.160512 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 07:56:53.160531 kernel: iommu: Default domain type: Translated Jul 2 07:56:53.160549 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 07:56:53.160568 kernel: vgaarb: loaded Jul 2 07:56:53.160585 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 07:56:53.160617 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 07:56:53.160636 kernel: PTP clock support registered Jul 2 07:56:53.160664 kernel: Registered efivars operations Jul 2 07:56:53.168915 kernel: PCI: Using ACPI for IRQ routing Jul 2 07:56:53.168938 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 07:56:53.168956 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jul 2 07:56:53.168974 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jul 2 07:56:53.168990 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jul 2 07:56:53.169015 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jul 2 07:56:53.169033 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 07:56:53.169049 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 07:56:53.169067 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 07:56:53.169096 kernel: pnp: PnP ACPI init Jul 2 07:56:53.169115 kernel: pnp: PnP ACPI: found 7 devices Jul 2 07:56:53.169131 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 07:56:53.169147 kernel: NET: Registered PF_INET protocol family Jul 2 07:56:53.169163 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 07:56:53.169179 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jul 2 07:56:53.169196 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 07:56:53.169211 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 07:56:53.169227 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Jul 2 07:56:53.169248 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jul 2 07:56:53.169272 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 2 07:56:53.169291 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 2 07:56:53.169308 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 07:56:53.169325 kernel: NET: Registered PF_XDP protocol family Jul 2 07:56:53.169569 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 07:56:53.177350 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 07:56:53.177540 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 07:56:53.180095 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jul 2 07:56:53.180302 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 07:56:53.180330 kernel: PCI: CLS 0 bytes, default 64 Jul 2 07:56:53.180347 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 2 07:56:53.180364 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB) Jul 2 07:56:53.180380 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 07:56:53.180396 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 2 07:56:53.180413 kernel: clocksource: Switched to clocksource tsc Jul 2 07:56:53.180438 kernel: Initialise system trusted keyrings Jul 2 07:56:53.180455 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jul 2 07:56:53.180469 kernel: Key type asymmetric registered Jul 2 07:56:53.180485 kernel: Asymmetric key parser 'x509' registered Jul 2 07:56:53.180501 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 07:56:53.180517 kernel: io scheduler mq-deadline registered Jul 2 
07:56:53.180534 kernel: io scheduler kyber registered Jul 2 07:56:53.180549 kernel: io scheduler bfq registered Jul 2 07:56:53.180566 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 07:56:53.180587 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 07:56:53.181888 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jul 2 07:56:53.181932 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 2 07:56:53.182112 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jul 2 07:56:53.182136 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 07:56:53.182312 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jul 2 07:56:53.182336 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 07:56:53.182355 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 07:56:53.182372 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 2 07:56:53.182395 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jul 2 07:56:53.182413 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jul 2 07:56:53.182590 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jul 2 07:56:53.182627 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 07:56:53.182646 kernel: i8042: Warning: Keylock active Jul 2 07:56:53.184010 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 07:56:53.184038 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 07:56:53.184240 kernel: rtc_cmos 00:00: RTC can wake from S4 Jul 2 07:56:53.185189 kernel: rtc_cmos 00:00: registered as rtc0 Jul 2 07:56:53.185360 kernel: rtc_cmos 00:00: setting system clock to 2024-07-02T07:56:52 UTC (1719907012) Jul 2 07:56:53.185511 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jul 2 07:56:53.185534 kernel: intel_pstate: CPU model not supported Jul 2 07:56:53.185552 kernel: pstore: Registered efi as persistent store backend Jul 2 07:56:53.185569 kernel: NET: Registered PF_INET6 protocol family Jul 2 07:56:53.185586 kernel: Segment Routing with IPv6 Jul 2 07:56:53.185603 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 07:56:53.185642 kernel: NET: Registered PF_PACKET protocol family Jul 2 07:56:53.188046 kernel: Key type dns_resolver registered Jul 2 07:56:53.188077 kernel: IPI shorthand broadcast: enabled Jul 2 07:56:53.188096 kernel: sched_clock: Marking stable (755766086, 149651161)->(1003274518, -97857271) Jul 2 07:56:53.188113 kernel: registered taskstats version 1 Jul 2 07:56:53.188130 kernel: Loading compiled-in X.509 certificates Jul 2 07:56:53.188147 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 07:56:53.188165 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 07:56:53.188181 kernel: Key type .fscrypt registered Jul 2 07:56:53.188204 kernel: Key type fscrypt-provisioning registered Jul 2 07:56:53.188221 kernel: pstore: Using crash dump compression: deflate Jul 2 07:56:53.188238 kernel: ima: Allocated hash algorithm: sha1 Jul 2 07:56:53.188255 kernel: ima: No architecture policies found Jul 2 07:56:53.188272 kernel: clk: Disabling unused clocks Jul 2 07:56:53.188288 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 07:56:53.188305 kernel: Write protecting the kernel read-only data: 28672k Jul 2 07:56:53.188322 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 
07:56:53.188343 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 07:56:53.188360 kernel: Run /init as init process Jul 2 07:56:53.188377 kernel: with arguments: Jul 2 07:56:53.188394 kernel: /init Jul 2 07:56:53.188411 kernel: with environment: Jul 2 07:56:53.188427 kernel: HOME=/ Jul 2 07:56:53.188444 kernel: TERM=linux Jul 2 07:56:53.188462 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 07:56:53.188484 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:56:53.188510 systemd[1]: Detected virtualization kvm. Jul 2 07:56:53.188529 systemd[1]: Detected architecture x86-64. Jul 2 07:56:53.188546 systemd[1]: Running in initrd. Jul 2 07:56:53.188564 systemd[1]: No hostname configured, using default hostname. Jul 2 07:56:53.188583 systemd[1]: Hostname set to <localhost>. Jul 2 07:56:53.188602 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:56:53.188628 systemd[1]: Queued start job for default target initrd.target. Jul 2 07:56:53.188650 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:56:53.190727 systemd[1]: Reached target cryptsetup.target. Jul 2 07:56:53.190747 systemd[1]: Reached target paths.target. Jul 2 07:56:53.190765 systemd[1]: Reached target slices.target. Jul 2 07:56:53.190784 systemd[1]: Reached target swap.target. Jul 2 07:56:53.190801 systemd[1]: Reached target timers.target. Jul 2 07:56:53.190820 systemd[1]: Listening on iscsid.socket. Jul 2 07:56:53.190838 systemd[1]: Listening on iscsiuio.socket. Jul 2 07:56:53.190862 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 07:56:53.190879 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 07:56:53.190897 systemd[1]: Listening on systemd-journald.socket. Jul 2 07:56:53.190913 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:56:53.190930 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:56:53.190946 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:56:53.190963 systemd[1]: Reached target sockets.target. Jul 2 07:56:53.190980 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:56:53.190998 systemd[1]: Finished network-cleanup.service. Jul 2 07:56:53.191020 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 07:56:53.191038 systemd[1]: Starting systemd-journald.service... Jul 2 07:56:53.191075 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:56:53.191097 systemd[1]: Starting systemd-resolved.service... Jul 2 07:56:53.191116 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 07:56:53.191134 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:56:53.191157 kernel: audit: type=1130 audit(1719907013.150:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.191175 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 07:56:53.191193 kernel: audit: type=1130 audit(1719907013.159:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.191211 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 07:56:53.191230 kernel: audit: type=1130 audit(1719907013.168:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.191248 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 07:56:53.191267 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:56:53.191292 systemd-journald[189]: Journal started Jul 2 07:56:53.191393 systemd-journald[189]: Runtime Journal (/run/log/journal/986b721d9fd27ab4107eb79666d25dc1) is 8.0M, max 148.8M, 140.8M free. Jul 2 07:56:53.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.167025 systemd-modules-load[190]: Inserted module 'overlay' Jul 2 07:56:53.190534 systemd-resolved[191]: Positive Trust Anchors: Jul 2 07:56:53.190546 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:56:53.190605 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:56:53.200795 systemd[1]: Started systemd-journald.service. Jul 2 07:56:53.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.203304 systemd-resolved[191]: Defaulting to hostname 'linux'. Jul 2 07:56:53.206717 kernel: audit: type=1130 audit(1719907013.200:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.216701 kernel: audit: type=1130 audit(1719907013.205:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.206778 systemd[1]: Started systemd-resolved.service. Jul 2 07:56:53.207082 systemd[1]: Reached target nss-lookup.target. Jul 2 07:56:53.220536 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:56:53.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:56:53.224716 kernel: audit: type=1130 audit(1719907013.219:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.230026 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 07:56:53.241821 kernel: audit: type=1130 audit(1719907013.232:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.239526 systemd[1]: Starting dracut-cmdline.service... Jul 2 07:56:53.258070 dracut-cmdline[206]: dracut-dracut-053 Jul 2 07:56:53.265799 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 07:56:53.265845 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:56:53.278039 systemd-modules-load[190]: Inserted module 'br_netfilter' Jul 2 07:56:53.281797 kernel: Bridge firewalling registered Jul 2 07:56:53.308678 kernel: SCSI subsystem initialized Jul 2 07:56:53.327710 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 07:56:53.327790 kernel: device-mapper: uevent: version 1.0.3 Jul 2 07:56:53.329684 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 07:56:53.334427 systemd-modules-load[190]: Inserted module 'dm_multipath' Jul 2 07:56:53.335966 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:56:53.354812 kernel: audit: type=1130 audit(1719907013.342:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.344982 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:56:53.362071 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:56:53.372809 kernel: Loading iSCSI transport class v2.0-870. Jul 2 07:56:53.372854 kernel: audit: type=1130 audit(1719907013.364:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:56:53.390707 kernel: iscsi: registered transport (tcp) Jul 2 07:56:53.417768 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:56:53.417864 kernel: QLogic iSCSI HBA Driver Jul 2 07:56:53.464622 systemd[1]: Finished dracut-cmdline.service. Jul 2 07:56:53.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.470328 systemd[1]: Starting dracut-pre-udev.service... Jul 2 07:56:53.528710 kernel: raid6: avx2x4 gen() 18197 MB/s Jul 2 07:56:53.545742 kernel: raid6: avx2x4 xor() 7744 MB/s Jul 2 07:56:53.562706 kernel: raid6: avx2x2 gen() 17676 MB/s Jul 2 07:56:53.579705 kernel: raid6: avx2x2 xor() 18446 MB/s Jul 2 07:56:53.596704 kernel: raid6: avx2x1 gen() 13943 MB/s Jul 2 07:56:53.613699 kernel: raid6: avx2x1 xor() 16012 MB/s Jul 2 07:56:53.630699 kernel: raid6: sse2x4 gen() 11050 MB/s Jul 2 07:56:53.647711 kernel: raid6: sse2x4 xor() 6548 MB/s Jul 2 07:56:53.664731 kernel: raid6: sse2x2 gen() 11852 MB/s Jul 2 07:56:53.681723 kernel: raid6: sse2x2 xor() 7319 MB/s Jul 2 07:56:53.698708 kernel: raid6: sse2x1 gen() 10289 MB/s Jul 2 07:56:53.716386 kernel: raid6: sse2x1 xor() 5102 MB/s Jul 2 07:56:53.716471 kernel: raid6: using algorithm avx2x4 gen() 18197 MB/s Jul 2 07:56:53.716508 kernel: raid6: .... xor() 7744 MB/s, rmw enabled Jul 2 07:56:53.717507 kernel: raid6: using avx2x2 recovery algorithm Jul 2 07:56:53.732690 kernel: xor: automatically using best checksumming function avx Jul 2 07:56:53.840692 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:56:53.852618 systemd[1]: Finished dracut-pre-udev.service. Jul 2 07:56:53.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.852000 audit: BPF prog-id=7 op=LOAD Jul 2 07:56:53.852000 audit: BPF prog-id=8 op=LOAD Jul 2 07:56:53.854873 systemd[1]: Starting systemd-udevd.service... Jul 2 07:56:53.872218 systemd-udevd[388]: Using default interface naming scheme 'v252'. Jul 2 07:56:53.879434 systemd[1]: Started systemd-udevd.service. Jul 2 07:56:53.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.900070 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 07:56:53.917914 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jul 2 07:56:53.956007 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 07:56:53.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:53.957245 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:56:54.025539 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:56:54.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:54.110691 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:56:54.225242 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 2 07:56:54.225316 kernel: AES CTR mode by8 optimization enabled Jul 2 07:56:54.232686 kernel: scsi host0: Virtio SCSI HBA Jul 2 07:56:54.256721 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jul 2 07:56:54.329279 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jul 2 07:56:54.329671 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jul 2 07:56:54.329896 kernel: sd 0:0:1:0: [sda] Write Protect is off Jul 2 07:56:54.334694 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jul 2 07:56:54.335053 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 2 07:56:54.363988 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 07:56:54.364073 kernel: GPT:17805311 != 25165823 Jul 2 07:56:54.364096 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 07:56:54.370090 kernel: GPT:17805311 != 25165823 Jul 2 07:56:54.373791 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 07:56:54.379040 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:56:54.386133 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jul 2 07:56:54.446389 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 07:56:54.463834 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (445) Jul 2 07:56:54.478606 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 07:56:54.487186 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 07:56:54.496082 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 07:56:54.523228 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:56:54.536273 systemd[1]: Starting disk-uuid.service... Jul 2 07:56:54.561010 disk-uuid[513]: Primary Header is updated. Jul 2 07:56:54.561010 disk-uuid[513]: Secondary Entries is updated. Jul 2 07:56:54.561010 disk-uuid[513]: Secondary Header is updated. Jul 2 07:56:54.587847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:56:54.594689 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:56:54.617693 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:56:55.612677 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:56:55.613121 disk-uuid[514]: The operation has completed successfully. Jul 2 07:56:55.688436 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:56:55.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:55.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:55.688604 systemd[1]: Finished disk-uuid.service. Jul 2 07:56:55.700481 systemd[1]: Starting verity-setup.service... Jul 2 07:56:55.729761 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 07:56:55.818983 systemd[1]: Found device dev-mapper-usr.device. Jul 2 07:56:55.828184 systemd[1]: Finished verity-setup.service. Jul 2 07:56:55.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:55.844252 systemd[1]: Mounting sysusr-usr.mount... 
Jul 2 07:56:55.947682 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:56:55.948440 systemd[1]: Mounted sysusr-usr.mount. Jul 2 07:56:55.948866 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 07:56:55.949785 systemd[1]: Starting ignition-setup.service... Jul 2 07:56:56.008871 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:56:56.008922 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:56:56.008945 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:56:56.008966 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 07:56:56.003525 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 07:56:56.025248 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 07:56:56.040070 systemd[1]: Finished ignition-setup.service. Jul 2 07:56:56.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:56.042006 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 07:56:56.125963 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 07:56:56.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:56.134000 audit: BPF prog-id=9 op=LOAD Jul 2 07:56:56.138379 systemd[1]: Starting systemd-networkd.service... Jul 2 07:56:56.171462 systemd-networkd[689]: lo: Link UP Jul 2 07:56:56.171476 systemd-networkd[689]: lo: Gained carrier Jul 2 07:56:56.172456 systemd-networkd[689]: Enumeration completed Jul 2 07:56:56.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:56.172868 systemd-networkd[689]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:56:56.173068 systemd[1]: Started systemd-networkd.service. Jul 2 07:56:56.175450 systemd-networkd[689]: eth0: Link UP Jul 2 07:56:56.175458 systemd-networkd[689]: eth0: Gained carrier Jul 2 07:56:56.183819 systemd-networkd[689]: eth0: DHCPv4 address 10.128.0.79/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 2 07:56:56.194027 systemd[1]: Reached target network.target. Jul 2 07:56:56.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:56.210063 systemd[1]: Starting iscsiuio.service... Jul 2 07:56:56.287846 iscsid[699]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:56:56.287846 iscsid[699]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 07:56:56.287846 iscsid[699]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 07:56:56.287846 iscsid[699]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul 2 07:56:56.287846 iscsid[699]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 07:56:56.287846 iscsid[699]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:56:56.287846 iscsid[699]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:56:56.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:56.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:56.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:56.240192 systemd[1]: Started iscsiuio.service. Jul 2 07:56:56.344149 ignition[607]: Ignition 2.14.0 Jul 2 07:56:56.267622 systemd[1]: Starting iscsid.service... Jul 2 07:56:56.344165 ignition[607]: Stage: fetch-offline Jul 2 07:56:56.280136 systemd[1]: Started iscsid.service. Jul 2 07:56:56.344374 ignition[607]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:56:56.296575 systemd[1]: Starting dracut-initqueue.service... Jul 2 07:56:56.344424 ignition[607]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:56:56.355319 systemd[1]: Finished dracut-initqueue.service. Jul 2 07:56:56.366130 ignition[607]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:56:56.377193 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 07:56:56.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:56.366372 ignition[607]: parsed url from cmdline: "" Jul 2 07:56:56.395084 systemd[1]: Reached target remote-fs-pre.target. Jul 2 07:56:56.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:56.366380 ignition[607]: no config URL provided Jul 2 07:56:56.416873 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:56:56.366390 ignition[607]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:56:56.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:56.434864 systemd[1]: Reached target remote-fs.target. Jul 2 07:56:56.366401 ignition[607]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:56:56.448946 systemd[1]: Starting dracut-pre-mount.service... Jul 2 07:56:56.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:56.366411 ignition[607]: failed to fetch config: resource requires networking Jul 2 07:56:56.469151 systemd[1]: Starting ignition-fetch.service... 
Jul 2 07:56:56.366822 ignition[607]: Ignition finished successfully Jul 2 07:56:56.501243 unknown[713]: fetched base config from "system" Jul 2 07:56:56.480387 ignition[713]: Ignition 2.14.0 Jul 2 07:56:56.501256 unknown[713]: fetched base config from "system" Jul 2 07:56:56.480396 ignition[713]: Stage: fetch Jul 2 07:56:56.501265 unknown[713]: fetched user config from "gcp" Jul 2 07:56:56.480541 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:56:56.515485 systemd[1]: Finished dracut-pre-mount.service. Jul 2 07:56:56.480578 ignition[713]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:56:56.525341 systemd[1]: Finished ignition-fetch.service. Jul 2 07:56:56.487865 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:56:56.548220 systemd[1]: Starting ignition-kargs.service... Jul 2 07:56:56.488081 ignition[713]: parsed url from cmdline: "" Jul 2 07:56:56.572321 systemd[1]: Finished ignition-kargs.service. Jul 2 07:56:56.488088 ignition[713]: no config URL provided Jul 2 07:56:56.588197 systemd[1]: Starting ignition-disks.service... Jul 2 07:56:56.488096 ignition[713]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:56:56.613017 systemd[1]: Finished ignition-disks.service. Jul 2 07:56:56.488109 ignition[713]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:56:56.629058 systemd[1]: Reached target initrd-root-device.target. Jul 2 07:56:56.488149 ignition[713]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jul 2 07:56:56.644886 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:56:56.498668 ignition[713]: GET result: OK Jul 2 07:56:56.659835 systemd[1]: Reached target local-fs.target. Jul 2 07:56:56.498741 ignition[713]: parsing config with SHA512: 6095bc4e03ae2ca715b76f99dcfedb5ccd1d444f9ff6f1f18663e8b125a3bb3cd2b02dc36bc01019fdd9eed87c3c8f020c3290ea0bfed8ad5aaef36a8e7ea2cb Jul 2 07:56:56.673929 systemd[1]: Reached target sysinit.target. Jul 2 07:56:56.502508 ignition[713]: fetch: fetch complete Jul 2 07:56:56.686903 systemd[1]: Reached target basic.target. Jul 2 07:56:56.502519 ignition[713]: fetch: fetch passed Jul 2 07:56:56.701144 systemd[1]: Starting systemd-fsck-root.service... 
Jul 2 07:56:56.502618 ignition[713]: Ignition finished successfully Jul 2 07:56:56.562220 ignition[719]: Ignition 2.14.0 Jul 2 07:56:56.562231 ignition[719]: Stage: kargs Jul 2 07:56:56.562387 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:56:56.562421 ignition[719]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:56:56.569581 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:56:56.571172 ignition[719]: kargs: kargs passed Jul 2 07:56:56.571226 ignition[719]: Ignition finished successfully Jul 2 07:56:56.600632 ignition[725]: Ignition 2.14.0 Jul 2 07:56:56.600641 ignition[725]: Stage: disks Jul 2 07:56:56.600828 ignition[725]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:56:56.600862 ignition[725]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:56:56.610495 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:56:56.611968 ignition[725]: disks: disks passed Jul 2 07:56:56.612024 ignition[725]: Ignition finished successfully Jul 2 07:56:56.740674 systemd-fsck[733]: ROOT: clean, 614/1628000 files, 124057/1617920 blocks Jul 2 07:56:56.939784 systemd[1]: Finished systemd-fsck-root.service. Jul 2 07:56:56.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:56.941104 systemd[1]: Mounting sysroot.mount... Jul 2 07:56:56.971703 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 07:56:56.978059 systemd[1]: Mounted sysroot.mount. Jul 2 07:56:56.978401 systemd[1]: Reached target initrd-root-fs.target. Jul 2 07:56:57.001916 systemd[1]: Mounting sysroot-usr.mount... Jul 2 07:56:57.014469 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 07:56:57.014532 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:56:57.014577 systemd[1]: Reached target ignition-diskful.target. Jul 2 07:56:57.087782 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (739) Jul 2 07:56:57.087822 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:56:57.087845 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:56:57.031241 systemd[1]: Mounted sysroot-usr.mount. Jul 2 07:56:57.099310 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:56:57.055302 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:56:57.123841 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 07:56:57.117051 systemd[1]: Starting initrd-setup-root.service... Jul 2 07:56:57.133050 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:56:57.151801 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:56:57.142576 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Jul 2 07:56:57.177821 initrd-setup-root[778]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:56:57.188805 initrd-setup-root[786]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:56:57.225214 systemd[1]: Finished initrd-setup-root.service. Jul 2 07:56:57.265977 kernel: kauditd_printk_skb: 23 callbacks suppressed Jul 2 07:56:57.266012 kernel: audit: type=1130 audit(1719907017.223:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:57.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:57.226888 systemd[1]: Starting ignition-mount.service... Jul 2 07:56:57.274001 systemd[1]: Starting sysroot-boot.service... Jul 2 07:56:57.278871 systemd-networkd[689]: eth0: Gained IPv6LL Jul 2 07:56:57.294146 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 07:56:57.294293 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 2 07:56:57.321383 systemd[1]: Finished sysroot-boot.service. Jul 2 07:56:57.328901 ignition[805]: INFO : Ignition 2.14.0 Jul 2 07:56:57.328901 ignition[805]: INFO : Stage: mount Jul 2 07:56:57.328901 ignition[805]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:56:57.328901 ignition[805]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:56:57.328901 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:56:57.328901 ignition[805]: INFO : mount: mount passed Jul 2 07:56:57.328901 ignition[805]: INFO : Ignition finished successfully Jul 2 07:56:57.511860 kernel: audit: type=1130 audit(1719907017.335:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:57.511927 kernel: audit: type=1130 audit(1719907017.371:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:57.511955 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (814) Jul 2 07:56:57.511980 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:56:57.512005 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:56:57.512030 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:56:57.512054 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 07:56:57.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:57.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:57.337348 systemd[1]: Finished ignition-mount.service. Jul 2 07:56:57.374217 systemd[1]: Starting ignition-files.service... Jul 2 07:56:57.404375 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
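The files stage that follows writes /etc/hosts, the oem-gce units and the other listed files by mounting /dev/disk/by-label/OEM for each operation; the log shows every ext4 mount attempt failing with "device or resource busy" and a btrfs retry succeeding. A rough sketch of that mount-with-fallback pattern is below, under the assumption of a throwaway mountpoint and read-only flags (neither detail is taken from Ignition's actual implementation); it needs root and golang.org/x/sys.

// Illustrative sketch of the ext4-then-btrfs mount fallback visible in the
// files-stage records below (op(4)/op(5), op(8)/op(9), ...). Not Ignition code.
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	const dev = "/dev/disk/by-label/OEM"

	// Temporary mountpoint, analogous to the /mnt/oemNNNNNNNNNN dirs in the log.
	target, err := os.MkdirTemp("/mnt", "oem")
	if err != nil {
		log.Fatal(err)
	}

	// Try ext4 first, then fall back to btrfs when the first attempt fails.
	if err := unix.Mount(dev, target, "ext4", unix.MS_RDONLY, ""); err != nil {
		log.Printf("ext4 mount failed (%v), trying btrfs", err)
		if err := unix.Mount(dev, target, "btrfs", unix.MS_RDONLY, ""); err != nil {
			log.Fatalf("btrfs mount failed: %v", err)
		}
	}
	defer unix.Unmount(target, 0)

	log.Printf("mounted %s at %s", dev, target)
}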
Jul 2 07:56:57.535911 ignition[833]: INFO : Ignition 2.14.0 Jul 2 07:56:57.535911 ignition[833]: INFO : Stage: files Jul 2 07:56:57.535911 ignition[833]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:56:57.535911 ignition[833]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:56:57.535911 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:56:57.535911 ignition[833]: DEBUG : files: compiled without relabeling support, skipping Jul 2 07:56:57.614866 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (837) Jul 2 07:56:57.481060 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 07:56:57.623851 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 07:56:57.623851 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 07:56:57.623851 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 07:56:57.623851 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 07:56:57.623851 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 07:56:57.623851 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts" Jul 2 07:56:57.623851 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:56:57.623851 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2099320507" Jul 2 07:56:57.623851 ignition[833]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2099320507": device or resource busy Jul 2 07:56:57.623851 ignition[833]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2099320507", trying btrfs: device or resource busy Jul 2 07:56:57.623851 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2099320507" Jul 2 07:56:57.623851 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2099320507" Jul 2 07:56:57.623851 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem2099320507" Jul 2 07:56:57.623851 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem2099320507" Jul 2 07:56:57.623851 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts" Jul 2 07:56:57.623851 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Jul 2 07:56:57.623851 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:56:57.544978 unknown[833]: wrote ssh authorized keys file for user: core Jul 2 07:56:57.889840 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: 
op(7): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3233210463" Jul 2 07:56:57.889840 ignition[833]: CRITICAL : files: createFilesystemsFiles: createFiles: op(7): op(8): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3233210463": device or resource busy Jul 2 07:56:57.889840 ignition[833]: ERROR : files: createFilesystemsFiles: createFiles: op(7): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3233210463", trying btrfs: device or resource busy Jul 2 07:56:57.889840 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3233210463" Jul 2 07:56:57.889840 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3233210463" Jul 2 07:56:57.889840 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [started] unmounting "/mnt/oem3233210463" Jul 2 07:56:57.889840 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [finished] unmounting "/mnt/oem3233210463" Jul 2 07:56:57.889840 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Jul 2 07:56:57.889840 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Jul 2 07:56:57.889840 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 07:56:57.889840 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:56:57.889840 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:56:57.889840 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 07:56:57.889840 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 07:56:58.139848 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Jul 2 07:56:58.139848 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:56:58.139848 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3517073005" Jul 2 07:56:58.139848 ignition[833]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3517073005": device or resource busy Jul 2 07:56:58.139848 ignition[833]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3517073005", trying btrfs: device or resource busy Jul 2 07:56:58.139848 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3517073005" Jul 2 07:56:58.139848 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting 
"/dev/disk/by-label/OEM" at "/mnt/oem3517073005" Jul 2 07:56:58.139848 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem3517073005" Jul 2 07:56:58.139848 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem3517073005" Jul 2 07:56:58.139848 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Jul 2 07:56:58.139848 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Jul 2 07:56:58.139848 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(12): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:56:58.139848 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2757410689" Jul 2 07:56:58.139848 ignition[833]: CRITICAL : files: createFilesystemsFiles: createFiles: op(12): op(13): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2757410689": device or resource busy Jul 2 07:56:58.422861 kernel: audit: type=1130 audit(1719907018.359:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.342904 systemd[1]: Finished ignition-files.service. Jul 2 07:56:58.436987 ignition[833]: ERROR : files: createFilesystemsFiles: createFiles: op(12): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2757410689", trying btrfs: device or resource busy Jul 2 07:56:58.436987 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2757410689" Jul 2 07:56:58.436987 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(14): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2757410689" Jul 2 07:56:58.436987 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(15): [started] unmounting "/mnt/oem2757410689" Jul 2 07:56:58.436987 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(15): [finished] unmounting "/mnt/oem2757410689" Jul 2 07:56:58.436987 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Jul 2 07:56:58.436987 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(16): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 07:56:58.436987 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(16): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jul 2 07:56:58.436987 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(16): GET result: OK Jul 2 07:56:58.436987 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(16): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 07:56:58.436987 ignition[833]: INFO : files: op(17): [started] processing unit 
"oem-gce-enable-oslogin.service" Jul 2 07:56:58.436987 ignition[833]: INFO : files: op(17): [finished] processing unit "oem-gce-enable-oslogin.service" Jul 2 07:56:58.436987 ignition[833]: INFO : files: op(18): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 2 07:56:58.436987 ignition[833]: INFO : files: op(18): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 2 07:56:58.436987 ignition[833]: INFO : files: op(19): [started] processing unit "oem-gce.service" Jul 2 07:56:58.436987 ignition[833]: INFO : files: op(19): [finished] processing unit "oem-gce.service" Jul 2 07:56:58.436987 ignition[833]: INFO : files: op(1a): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Jul 2 07:56:58.892898 kernel: audit: type=1130 audit(1719907018.445:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.892952 kernel: audit: type=1130 audit(1719907018.497:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.892976 kernel: audit: type=1131 audit(1719907018.497:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.893012 kernel: audit: type=1130 audit(1719907018.615:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.893035 kernel: audit: type=1131 audit(1719907018.615:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.893052 kernel: audit: type=1130 audit(1719907018.798:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:56:58.370311 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 07:56:58.911857 ignition[833]: INFO : files: op(1a): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Jul 2 07:56:58.911857 ignition[833]: INFO : files: op(1b): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 07:56:58.911857 ignition[833]: INFO : files: op(1b): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 07:56:58.911857 ignition[833]: INFO : files: op(1c): [started] setting preset to enabled for "oem-gce.service" Jul 2 07:56:58.911857 ignition[833]: INFO : files: op(1c): [finished] setting preset to enabled for "oem-gce.service" Jul 2 07:56:58.911857 ignition[833]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:56:58.911857 ignition[833]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:56:58.911857 ignition[833]: INFO : files: files passed Jul 2 07:56:58.911857 ignition[833]: INFO : Ignition finished successfully Jul 2 07:56:58.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.048072 initrd-setup-root-after-ignition[856]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:56:58.398021 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 07:56:58.399251 systemd[1]: Starting ignition-quench.service... Jul 2 07:56:58.430191 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 07:56:58.447338 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 07:56:58.447483 systemd[1]: Finished ignition-quench.service. Jul 2 07:56:58.499087 systemd[1]: Reached target ignition-complete.target. Jul 2 07:56:58.581092 systemd[1]: Starting initrd-parse-etc.service... Jul 2 07:56:58.616409 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:56:59.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.616527 systemd[1]: Finished initrd-parse-etc.service. Jul 2 07:56:58.617166 systemd[1]: Reached target initrd-fs.target. Jul 2 07:56:58.706014 systemd[1]: Reached target initrd.target. Jul 2 07:56:59.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.726268 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 07:56:59.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.727754 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 07:56:59.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:56:58.769207 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 07:56:59.297838 ignition[871]: INFO : Ignition 2.14.0 Jul 2 07:56:59.297838 ignition[871]: INFO : Stage: umount Jul 2 07:56:59.297838 ignition[871]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:56:59.297838 ignition[871]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:56:58.801415 systemd[1]: Starting initrd-cleanup.service... Jul 2 07:56:59.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.364010 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:56:59.364010 ignition[871]: INFO : umount: umount passed Jul 2 07:56:59.364010 ignition[871]: INFO : Ignition finished successfully Jul 2 07:56:59.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.859477 systemd[1]: Stopped target nss-lookup.target. Jul 2 07:56:59.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.879179 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 07:56:59.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.902216 systemd[1]: Stopped target timers.target. Jul 2 07:56:59.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.919126 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:56:59.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.919318 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 07:56:59.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.953361 systemd[1]: Stopped target initrd.target. Jul 2 07:56:59.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:58.968282 systemd[1]: Stopped target basic.target. Jul 2 07:56:59.004172 systemd[1]: Stopped target ignition-complete.target. Jul 2 07:56:59.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.041170 systemd[1]: Stopped target ignition-diskful.target. Jul 2 07:56:59.063172 systemd[1]: Stopped target initrd-root-device.target. Jul 2 07:56:59.086176 systemd[1]: Stopped target remote-fs.target. 
Jul 2 07:56:59.111126 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 07:56:59.126154 systemd[1]: Stopped target sysinit.target. Jul 2 07:56:59.143145 systemd[1]: Stopped target local-fs.target. Jul 2 07:56:59.160244 systemd[1]: Stopped target local-fs-pre.target. Jul 2 07:56:59.176104 systemd[1]: Stopped target swap.target. Jul 2 07:56:59.192139 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:56:59.192338 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 07:56:59.201420 systemd[1]: Stopped target cryptsetup.target. Jul 2 07:56:59.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.225134 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:56:59.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.225370 systemd[1]: Stopped dracut-initqueue.service. Jul 2 07:56:59.246286 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 07:56:59.246484 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 07:56:59.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.263245 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:56:59.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.263447 systemd[1]: Stopped ignition-files.service. Jul 2 07:56:59.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.739000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:56:59.283741 systemd[1]: Stopping ignition-mount.service... Jul 2 07:56:59.329980 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 07:56:59.330268 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 07:56:59.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.351778 systemd[1]: Stopping sysroot-boot.service... Jul 2 07:56:59.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.378891 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 07:56:59.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.379186 systemd[1]: Stopped systemd-udev-trigger.service. 
Jul 2 07:56:59.394144 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 07:56:59.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.394427 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 07:56:59.413951 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:56:59.415152 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 07:56:59.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.415265 systemd[1]: Stopped ignition-mount.service. Jul 2 07:56:59.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.425535 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:56:59.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.425675 systemd[1]: Stopped sysroot-boot.service. Jul 2 07:56:59.440603 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:56:59.440791 systemd[1]: Stopped ignition-disks.service. Jul 2 07:56:59.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.455950 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:56:59.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.456039 systemd[1]: Stopped ignition-kargs.service. Jul 2 07:57:00.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:00.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:56:59.472976 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 07:56:59.473062 systemd[1]: Stopped ignition-fetch.service. Jul 2 07:56:59.487949 systemd[1]: Stopped target network.target. Jul 2 07:56:59.502852 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 07:56:59.502963 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 07:57:00.076431 systemd-journald[189]: Received SIGTERM from PID 1 (systemd). Jul 2 07:56:59.518996 systemd[1]: Stopped target paths.target. Jul 2 07:57:00.083886 iscsid[699]: iscsid shutting down. Jul 2 07:56:59.533933 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:56:59.537771 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 07:56:59.542991 systemd[1]: Stopped target slices.target. Jul 2 07:56:59.570960 systemd[1]: Stopped target sockets.target. Jul 2 07:56:59.592045 systemd[1]: iscsid.socket: Deactivated successfully. 
Jul 2 07:56:59.592095 systemd[1]: Closed iscsid.socket. Jul 2 07:56:59.612043 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:56:59.612099 systemd[1]: Closed iscsiuio.socket. Jul 2 07:56:59.619075 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:56:59.619151 systemd[1]: Stopped ignition-setup.service. Jul 2 07:56:59.646025 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:56:59.646125 systemd[1]: Stopped initrd-setup-root.service. Jul 2 07:56:59.662276 systemd[1]: Stopping systemd-networkd.service... Jul 2 07:56:59.666768 systemd-networkd[689]: eth0: DHCPv6 lease lost Jul 2 07:56:59.678069 systemd[1]: Stopping systemd-resolved.service... Jul 2 07:56:59.691399 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:56:59.691536 systemd[1]: Stopped systemd-resolved.service. Jul 2 07:56:59.709697 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:56:59.709841 systemd[1]: Stopped systemd-networkd.service. Jul 2 07:56:59.725595 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:56:59.725750 systemd[1]: Finished initrd-cleanup.service. Jul 2 07:56:59.742122 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:56:59.742171 systemd[1]: Closed systemd-networkd.socket. Jul 2 07:56:59.756961 systemd[1]: Stopping network-cleanup.service... Jul 2 07:56:59.769813 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:56:59.769937 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 07:56:59.787997 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:56:59.788083 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:56:59.803179 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:56:59.803251 systemd[1]: Stopped systemd-modules-load.service. Jul 2 07:56:59.818151 systemd[1]: Stopping systemd-udevd.service... Jul 2 07:56:59.836418 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:56:59.837131 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 07:56:59.837285 systemd[1]: Stopped systemd-udevd.service. Jul 2 07:56:59.853775 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:56:59.853859 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 07:56:59.869918 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:56:59.869985 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 07:56:59.884885 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:56:59.884982 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 07:56:59.901982 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 07:56:59.902069 systemd[1]: Stopped dracut-cmdline.service. Jul 2 07:56:59.918970 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:56:59.919057 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 07:56:59.935231 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 07:56:59.957920 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:56:59.958055 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 07:56:59.974570 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:56:59.974723 systemd[1]: Stopped network-cleanup.service. Jul 2 07:56:59.989339 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:56:59.989455 systemd[1]: Finished initrd-udevadm-cleanup-db.service. 
Jul 2 07:57:00.005148 systemd[1]: Reached target initrd-switch-root.target. Jul 2 07:57:00.021036 systemd[1]: Starting initrd-switch-root.service... Jul 2 07:57:00.044171 systemd[1]: Switching root. Jul 2 07:57:00.093063 systemd-journald[189]: Journal stopped Jul 2 07:57:05.067057 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 07:57:05.067186 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 07:57:05.067211 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:57:05.067234 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:57:05.067257 kernel: SELinux: policy capability open_perms=1 Jul 2 07:57:05.067277 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:57:05.067433 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:57:05.067458 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:57:05.067492 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:57:05.067518 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:57:05.067541 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:57:05.067567 systemd[1]: Successfully loaded SELinux policy in 111.881ms. Jul 2 07:57:05.067608 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.501ms. Jul 2 07:57:05.067635 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:57:05.067694 systemd[1]: Detected virtualization kvm. Jul 2 07:57:05.067719 systemd[1]: Detected architecture x86-64. Jul 2 07:57:05.067755 systemd[1]: Detected first boot. Jul 2 07:57:05.068630 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:57:05.069349 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 07:57:05.069392 systemd[1]: Populated /etc with preset unit settings. Jul 2 07:57:05.069430 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:57:05.069458 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:57:05.069499 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
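The locksmithd.service and docker.socket messages above are systemd deprecation warnings rather than failures; the replacements they name (CPUWeight=, MemoryMax=, /run/docker.sock) are the current directives. A hypothetical drop-in showing the shape of the fix; the numeric values are placeholders, not Flatcar's shipped settings.

# /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf (hypothetical drop-in)
[Service]
# was: CPUShares=...   CPUWeight= ranges 1..10000, default 100
CPUWeight=100
# was: MemoryLimit=... MemoryMax= is the cgroup-v2 hard cap
MemoryMax=512M
# For the docker.socket warning, the analogous change is
# ListenStream=/run/docker.sock instead of /var/run/docker.sock.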
Jul 2 07:57:05.069533 kernel: kauditd_printk_skb: 49 callbacks suppressed Jul 2 07:57:05.069558 kernel: audit: type=1334 audit(1719907024.119:86): prog-id=12 op=LOAD Jul 2 07:57:05.069587 kernel: audit: type=1334 audit(1719907024.119:87): prog-id=3 op=UNLOAD Jul 2 07:57:05.069609 kernel: audit: type=1334 audit(1719907024.125:88): prog-id=13 op=LOAD Jul 2 07:57:05.069633 kernel: audit: type=1334 audit(1719907024.131:89): prog-id=14 op=LOAD Jul 2 07:57:05.069674 kernel: audit: type=1334 audit(1719907024.132:90): prog-id=4 op=UNLOAD Jul 2 07:57:05.069696 kernel: audit: type=1334 audit(1719907024.132:91): prog-id=5 op=UNLOAD Jul 2 07:57:05.069718 kernel: audit: type=1334 audit(1719907024.140:92): prog-id=15 op=LOAD Jul 2 07:57:05.069741 kernel: audit: type=1334 audit(1719907024.140:93): prog-id=12 op=UNLOAD Jul 2 07:57:05.069762 kernel: audit: type=1334 audit(1719907024.147:94): prog-id=16 op=LOAD Jul 2 07:57:05.069790 kernel: audit: type=1334 audit(1719907024.154:95): prog-id=17 op=LOAD Jul 2 07:57:05.069814 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 07:57:05.069841 systemd[1]: Stopped iscsiuio.service. Jul 2 07:57:05.069863 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 07:57:05.069884 systemd[1]: Stopped iscsid.service. Jul 2 07:57:05.069913 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 07:57:05.069936 systemd[1]: Stopped initrd-switch-root.service. Jul 2 07:57:05.069960 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 07:57:05.069989 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 07:57:05.070012 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 07:57:05.070034 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 2 07:57:05.070056 systemd[1]: Created slice system-getty.slice. Jul 2 07:57:05.070076 systemd[1]: Created slice system-modprobe.slice. Jul 2 07:57:05.070098 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 07:57:05.070128 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 07:57:05.070151 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 07:57:05.070178 systemd[1]: Created slice user.slice. Jul 2 07:57:05.070201 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:57:05.070224 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 07:57:05.070247 systemd[1]: Set up automount boot.automount. Jul 2 07:57:05.070269 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 07:57:05.070314 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 07:57:05.070336 systemd[1]: Stopped target initrd-fs.target. Jul 2 07:57:05.071310 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 07:57:05.071342 systemd[1]: Reached target integritysetup.target. Jul 2 07:57:05.071371 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:57:05.071398 systemd[1]: Reached target remote-fs.target. Jul 2 07:57:05.071421 systemd[1]: Reached target slices.target. Jul 2 07:57:05.071443 systemd[1]: Reached target swap.target. Jul 2 07:57:05.071465 systemd[1]: Reached target torcx.target. Jul 2 07:57:05.071495 systemd[1]: Reached target veritysetup.target. Jul 2 07:57:05.071518 systemd[1]: Listening on systemd-coredump.socket. Jul 2 07:57:05.071540 systemd[1]: Listening on systemd-initctl.socket. Jul 2 07:57:05.071562 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:57:05.071585 systemd[1]: Listening on systemd-udevd-control.socket. 
Jul 2 07:57:05.071611 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:57:05.071634 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 07:57:05.071677 systemd[1]: Mounting dev-hugepages.mount... Jul 2 07:57:05.071700 systemd[1]: Mounting dev-mqueue.mount... Jul 2 07:57:05.071722 systemd[1]: Mounting media.mount... Jul 2 07:57:05.071746 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:57:05.071769 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 07:57:05.071791 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 07:57:05.071813 systemd[1]: Mounting tmp.mount... Jul 2 07:57:05.071843 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 07:57:05.071866 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:57:05.071889 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:57:05.071911 systemd[1]: Starting modprobe@configfs.service... Jul 2 07:57:05.071933 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:57:05.071968 systemd[1]: Starting modprobe@drm.service... Jul 2 07:57:05.071991 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:57:05.072014 systemd[1]: Starting modprobe@fuse.service... Jul 2 07:57:05.072036 systemd[1]: Starting modprobe@loop.service... Jul 2 07:57:05.072061 kernel: fuse: init (API version 7.34) Jul 2 07:57:05.072083 kernel: loop: module loaded Jul 2 07:57:05.072112 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 07:57:05.072135 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 07:57:05.072158 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 07:57:05.072180 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 07:57:05.072203 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 07:57:05.072226 systemd[1]: Stopped systemd-journald.service. Jul 2 07:57:05.072248 systemd[1]: Starting systemd-journald.service... Jul 2 07:57:05.072274 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:57:05.072296 systemd[1]: Starting systemd-network-generator.service... Jul 2 07:57:05.072318 systemd[1]: Starting systemd-remount-fs.service... Jul 2 07:57:05.072348 systemd-journald[995]: Journal started Jul 2 07:57:05.072435 systemd-journald[995]: Runtime Journal (/run/log/journal/986b721d9fd27ab4107eb79666d25dc1) is 8.0M, max 148.8M, 140.8M free. 
Jul 2 07:57:00.092000 audit: BPF prog-id=9 op=UNLOAD Jul 2 07:57:00.480000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:57:00.640000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:57:00.640000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:57:00.640000 audit: BPF prog-id=10 op=LOAD Jul 2 07:57:00.640000 audit: BPF prog-id=10 op=UNLOAD Jul 2 07:57:00.640000 audit: BPF prog-id=11 op=LOAD Jul 2 07:57:00.640000 audit: BPF prog-id=11 op=UNLOAD Jul 2 07:57:00.792000 audit[904]: AVC avc: denied { associate } for pid=904 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 07:57:00.792000 audit[904]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=887 pid=904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:57:00.792000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:57:00.803000 audit[904]: AVC avc: denied { associate } for pid=904 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 07:57:00.803000 audit[904]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=887 pid=904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:57:00.803000 audit: CWD cwd="/" Jul 2 07:57:00.803000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:00.803000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:00.803000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:57:04.119000 audit: BPF prog-id=12 op=LOAD Jul 2 07:57:04.119000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:57:04.125000 audit: BPF prog-id=13 op=LOAD Jul 2 07:57:04.131000 audit: BPF prog-id=14 op=LOAD Jul 2 07:57:04.132000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:57:04.132000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:57:04.140000 audit: BPF prog-id=15 op=LOAD Jul 2 07:57:04.140000 audit: BPF prog-id=12 
op=UNLOAD Jul 2 07:57:04.147000 audit: BPF prog-id=16 op=LOAD Jul 2 07:57:04.154000 audit: BPF prog-id=17 op=LOAD Jul 2 07:57:04.154000 audit: BPF prog-id=13 op=UNLOAD Jul 2 07:57:04.154000 audit: BPF prog-id=14 op=UNLOAD Jul 2 07:57:04.161000 audit: BPF prog-id=18 op=LOAD Jul 2 07:57:04.161000 audit: BPF prog-id=15 op=UNLOAD Jul 2 07:57:04.168000 audit: BPF prog-id=19 op=LOAD Jul 2 07:57:04.183000 audit: BPF prog-id=20 op=LOAD Jul 2 07:57:04.183000 audit: BPF prog-id=16 op=UNLOAD Jul 2 07:57:04.183000 audit: BPF prog-id=17 op=UNLOAD Jul 2 07:57:04.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:04.210000 audit: BPF prog-id=18 op=UNLOAD Jul 2 07:57:04.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:04.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:04.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:04.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:04.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:57:05.017000 audit: BPF prog-id=21 op=LOAD Jul 2 07:57:05.017000 audit: BPF prog-id=22 op=LOAD Jul 2 07:57:05.017000 audit: BPF prog-id=23 op=LOAD Jul 2 07:57:05.017000 audit: BPF prog-id=19 op=UNLOAD Jul 2 07:57:05.017000 audit: BPF prog-id=20 op=UNLOAD Jul 2 07:57:05.062000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:57:05.062000 audit[995]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd13bd5ac0 a2=4000 a3=7ffd13bd5b5c items=0 ppid=1 pid=995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:57:05.062000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:57:04.118755 systemd[1]: Queued start job for default target multi-user.target. Jul 2 07:57:00.786943 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:57:04.193616 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 07:57:00.788676 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:57:00.788718 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:57:00.788777 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 07:57:00.788799 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 07:57:00.788858 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 07:57:00.788883 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 07:57:00.789206 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 07:57:00.789284 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:57:00.789309 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:57:00.792636 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 07:57:00.792722 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" 
level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 07:57:00.792758 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 07:57:00.792786 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 07:57:00.792821 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 07:57:00.792843 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 07:57:03.462796 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:03Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:57:03.463088 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:03Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:57:03.463245 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:03Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:57:03.463487 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:03Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:57:03.463548 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:03Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 07:57:03.463635 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T07:57:03Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 07:57:05.091698 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:57:05.105709 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 07:57:05.111714 systemd[1]: Stopped verity-setup.service. Jul 2 07:57:05.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:57:05.130697 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:57:05.140707 systemd[1]: Started systemd-journald.service. Jul 2 07:57:05.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.150152 systemd[1]: Mounted dev-hugepages.mount. Jul 2 07:57:05.157054 systemd[1]: Mounted dev-mqueue.mount. Jul 2 07:57:05.164062 systemd[1]: Mounted media.mount. Jul 2 07:57:05.172061 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 07:57:05.181075 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 07:57:05.190148 systemd[1]: Mounted tmp.mount. Jul 2 07:57:05.197196 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 07:57:05.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.206286 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:57:05.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.215283 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:57:05.215505 systemd[1]: Finished modprobe@configfs.service. Jul 2 07:57:05.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.224331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:57:05.224560 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:57:05.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.233275 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:57:05.233487 systemd[1]: Finished modprobe@drm.service. Jul 2 07:57:05.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.242321 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:57:05.242580 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 2 07:57:05.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.251318 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 07:57:05.251549 systemd[1]: Finished modprobe@fuse.service. Jul 2 07:57:05.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.260312 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:57:05.260536 systemd[1]: Finished modprobe@loop.service. Jul 2 07:57:05.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.269310 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:57:05.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.278332 systemd[1]: Finished systemd-network-generator.service. Jul 2 07:57:05.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.287292 systemd[1]: Finished systemd-remount-fs.service. Jul 2 07:57:05.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.297274 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:57:05.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.306576 systemd[1]: Reached target network-pre.target. Jul 2 07:57:05.316368 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 07:57:05.326410 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 07:57:05.333862 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:57:05.338501 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 07:57:05.347815 systemd[1]: Starting systemd-journal-flush.service... 
Jul 2 07:57:05.370643 systemd-journald[995]: Time spent on flushing to /var/log/journal/986b721d9fd27ab4107eb79666d25dc1 is 51.019ms for 1135 entries. Jul 2 07:57:05.370643 systemd-journald[995]: System Journal (/var/log/journal/986b721d9fd27ab4107eb79666d25dc1) is 8.0M, max 584.8M, 576.8M free. Jul 2 07:57:05.467739 systemd-journald[995]: Received client request to flush runtime journal. Jul 2 07:57:05.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.356872 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:57:05.358745 systemd[1]: Starting systemd-random-seed.service... Jul 2 07:57:05.365877 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:57:05.367785 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:57:05.470621 udevadm[1009]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 07:57:05.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:05.385780 systemd[1]: Starting systemd-sysusers.service... Jul 2 07:57:05.394700 systemd[1]: Starting systemd-udev-settle.service... Jul 2 07:57:05.405363 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 07:57:05.414020 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 07:57:05.424356 systemd[1]: Finished systemd-random-seed.service. Jul 2 07:57:05.438770 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:57:05.447108 systemd[1]: Reached target first-boot-complete.target. Jul 2 07:57:05.461972 systemd[1]: Finished systemd-sysusers.service. Jul 2 07:57:05.471443 systemd[1]: Finished systemd-journal-flush.service. Jul 2 07:57:05.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:06.091288 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 07:57:06.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:06.099000 audit: BPF prog-id=24 op=LOAD Jul 2 07:57:06.099000 audit: BPF prog-id=25 op=LOAD Jul 2 07:57:06.099000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:57:06.099000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:57:06.101862 systemd[1]: Starting systemd-udevd.service... Jul 2 07:57:06.124895 systemd-udevd[1014]: Using default interface naming scheme 'v252'. Jul 2 07:57:06.183990 systemd[1]: Started systemd-udevd.service. Jul 2 07:57:06.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:57:06.194000 audit: BPF prog-id=26 op=LOAD Jul 2 07:57:06.197306 systemd[1]: Starting systemd-networkd.service... Jul 2 07:57:06.214000 audit: BPF prog-id=27 op=LOAD Jul 2 07:57:06.214000 audit: BPF prog-id=28 op=LOAD Jul 2 07:57:06.214000 audit: BPF prog-id=29 op=LOAD Jul 2 07:57:06.217546 systemd[1]: Starting systemd-userdbd.service... Jul 2 07:57:06.255248 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 07:57:06.297644 systemd[1]: Started systemd-userdbd.service. Jul 2 07:57:06.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:06.361686 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 07:57:06.446585 systemd-networkd[1024]: lo: Link UP Jul 2 07:57:06.446603 systemd-networkd[1024]: lo: Gained carrier Jul 2 07:57:06.447463 systemd-networkd[1024]: Enumeration completed Jul 2 07:57:06.447771 systemd[1]: Started systemd-networkd.service. Jul 2 07:57:06.448676 systemd-networkd[1024]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:57:06.450969 systemd-networkd[1024]: eth0: Link UP Jul 2 07:57:06.450988 systemd-networkd[1024]: eth0: Gained carrier Jul 2 07:57:06.457401 kernel: ACPI: button: Power Button [PWRF] Jul 2 07:57:06.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:06.465961 systemd-networkd[1024]: eth0: DHCPv4 address 10.128.0.79/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 2 07:57:06.449000 audit[1022]: AVC avc: denied { confidentiality } for pid=1022 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:57:06.449000 audit[1022]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56351b86db60 a1=3207c a2=7f1345baebc5 a3=5 items=108 ppid=1014 pid=1022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:57:06.449000 audit: CWD cwd="/" Jul 2 07:57:06.449000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=1 name=(null) inode=13140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=2 name=(null) inode=13140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=3 name=(null) inode=13141 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=4 name=(null) inode=13140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=5 
name=(null) inode=13142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=6 name=(null) inode=13140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=7 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=8 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=9 name=(null) inode=13144 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=10 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=11 name=(null) inode=13145 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=12 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=13 name=(null) inode=13146 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=14 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=15 name=(null) inode=13147 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=16 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=17 name=(null) inode=13148 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=18 name=(null) inode=13140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=19 name=(null) inode=13149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=20 name=(null) inode=13149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=21 name=(null) inode=13150 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=22 name=(null) inode=13149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=23 name=(null) inode=13151 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=24 name=(null) inode=13149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=25 name=(null) inode=13152 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=26 name=(null) inode=13149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=27 name=(null) inode=13153 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=28 name=(null) inode=13149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=29 name=(null) inode=13154 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=30 name=(null) inode=13140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=31 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=32 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=33 name=(null) inode=13156 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=34 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=35 name=(null) inode=13157 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=36 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=37 name=(null) inode=13158 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=38 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=39 name=(null) inode=13159 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=40 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=41 name=(null) inode=13160 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=42 name=(null) inode=13140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=43 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=44 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=45 name=(null) inode=13162 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=46 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=47 name=(null) inode=13163 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=48 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=49 name=(null) inode=13164 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=50 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=51 name=(null) inode=13165 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=52 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=53 name=(null) inode=13166 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=55 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=56 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=57 name=(null) inode=13168 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=58 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=59 name=(null) inode=13169 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=60 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=61 name=(null) inode=13170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=62 name=(null) inode=13170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=63 name=(null) inode=13171 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=64 name=(null) inode=13170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=65 name=(null) inode=13172 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=66 name=(null) inode=13170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=67 name=(null) inode=13173 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=68 name=(null) inode=13170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=69 name=(null) inode=13174 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=70 name=(null) inode=13170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=71 name=(null) inode=13175 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=72 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=73 name=(null) inode=13176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=74 name=(null) inode=13176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=75 name=(null) inode=13177 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=76 name=(null) inode=13176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=77 name=(null) inode=13178 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=78 name=(null) inode=13176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=79 name=(null) inode=13179 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=80 name=(null) inode=13176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=81 name=(null) inode=13180 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=82 name=(null) inode=13176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=83 name=(null) inode=13181 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=84 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=85 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=86 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH 
item=87 name=(null) inode=13183 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=88 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=89 name=(null) inode=13184 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=90 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=91 name=(null) inode=13185 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=92 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=93 name=(null) inode=13186 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=94 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=95 name=(null) inode=13187 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=96 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=97 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=98 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=99 name=(null) inode=13189 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=100 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=101 name=(null) inode=13190 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=102 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=103 name=(null) inode=13191 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=104 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=105 name=(null) inode=13192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=106 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PATH item=107 name=(null) inode=13193 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:57:06.449000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 07:57:06.499696 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1041) Jul 2 07:57:06.542683 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jul 2 07:57:06.571048 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 2 07:57:06.571417 kernel: EDAC MC: Ver: 3.0.0 Jul 2 07:57:06.577688 kernel: ACPI: button: Sleep Button [SLPF] Jul 2 07:57:06.585531 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:57:06.603683 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 2 07:57:06.621687 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 07:57:06.636290 systemd[1]: Finished systemd-udev-settle.service. Jul 2 07:57:06.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:06.646676 systemd[1]: Starting lvm2-activation-early.service... Jul 2 07:57:06.679853 lvm[1051]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:57:06.711223 systemd[1]: Finished lvm2-activation-early.service. Jul 2 07:57:06.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:06.720074 systemd[1]: Reached target cryptsetup.target. Jul 2 07:57:06.730710 systemd[1]: Starting lvm2-activation.service... Jul 2 07:57:06.735968 lvm[1052]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:57:06.764138 systemd[1]: Finished lvm2-activation.service. Jul 2 07:57:06.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:06.773088 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:57:06.781890 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 07:57:06.781944 systemd[1]: Reached target local-fs.target. Jul 2 07:57:06.790867 systemd[1]: Reached target machines.target. 
Jul 2 07:57:06.801633 systemd[1]: Starting ldconfig.service... Jul 2 07:57:06.810634 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:57:06.810769 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:57:06.812642 systemd[1]: Starting systemd-boot-update.service... Jul 2 07:57:06.821685 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 07:57:06.834976 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 07:57:06.837575 systemd[1]: Starting systemd-sysext.service... Jul 2 07:57:06.839126 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1054 (bootctl) Jul 2 07:57:06.841315 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 07:57:06.856224 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 07:57:06.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:06.873440 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 07:57:06.883799 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 07:57:06.884038 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 07:57:06.913694 kernel: loop0: detected capacity change from 0 to 210664 Jul 2 07:57:07.006885 systemd-fsck[1063]: fsck.fat 4.2 (2021-01-31) Jul 2 07:57:07.006885 systemd-fsck[1063]: /dev/sda1: 789 files, 119238/258078 clusters Jul 2 07:57:07.009726 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 07:57:07.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.022320 systemd[1]: Mounting boot.mount... Jul 2 07:57:07.066353 systemd[1]: Mounted boot.mount. Jul 2 07:57:07.082433 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 07:57:07.083393 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 07:57:07.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.093805 systemd[1]: Finished systemd-boot-update.service. Jul 2 07:57:07.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.112055 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 07:57:07.139703 kernel: loop1: detected capacity change from 0 to 210664 Jul 2 07:57:07.165127 (sd-sysext)[1068]: Using extensions 'kubernetes'. Jul 2 07:57:07.167523 (sd-sysext)[1068]: Merged extensions into '/usr'. Jul 2 07:57:07.192741 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:57:07.195495 systemd[1]: Mounting usr-share-oem.mount... Jul 2 07:57:07.201615 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Jul 2 07:57:07.204092 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:57:07.212878 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:57:07.223029 systemd[1]: Starting modprobe@loop.service... Jul 2 07:57:07.229940 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:57:07.230343 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:57:07.230605 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:57:07.235929 systemd[1]: Mounted usr-share-oem.mount. Jul 2 07:57:07.243535 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:57:07.243784 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:57:07.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.253450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:57:07.253841 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:57:07.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.263878 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:57:07.264088 systemd[1]: Finished modprobe@loop.service. Jul 2 07:57:07.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.275087 systemd[1]: Finished systemd-sysext.service. Jul 2 07:57:07.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.286934 systemd[1]: Starting ensure-sysext.service... Jul 2 07:57:07.294897 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:57:07.295008 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:57:07.296926 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 07:57:07.310546 systemd[1]: Reloading. Jul 2 07:57:07.345265 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Jul 2 07:57:07.358110 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 07:57:07.372197 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 07:57:07.477345 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-07-02T07:57:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:57:07.483779 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-07-02T07:57:07Z" level=info msg="torcx already run" Jul 2 07:57:07.536804 ldconfig[1053]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 07:57:07.633353 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:57:07.633680 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:57:07.673767 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:57:07.759000 audit: BPF prog-id=30 op=LOAD Jul 2 07:57:07.759000 audit: BPF prog-id=26 op=UNLOAD Jul 2 07:57:07.760000 audit: BPF prog-id=31 op=LOAD Jul 2 07:57:07.760000 audit: BPF prog-id=27 op=UNLOAD Jul 2 07:57:07.760000 audit: BPF prog-id=32 op=LOAD Jul 2 07:57:07.761000 audit: BPF prog-id=33 op=LOAD Jul 2 07:57:07.761000 audit: BPF prog-id=28 op=UNLOAD Jul 2 07:57:07.761000 audit: BPF prog-id=29 op=UNLOAD Jul 2 07:57:07.761000 audit: BPF prog-id=34 op=LOAD Jul 2 07:57:07.761000 audit: BPF prog-id=21 op=UNLOAD Jul 2 07:57:07.762000 audit: BPF prog-id=35 op=LOAD Jul 2 07:57:07.762000 audit: BPF prog-id=36 op=LOAD Jul 2 07:57:07.762000 audit: BPF prog-id=22 op=UNLOAD Jul 2 07:57:07.762000 audit: BPF prog-id=23 op=UNLOAD Jul 2 07:57:07.763000 audit: BPF prog-id=37 op=LOAD Jul 2 07:57:07.763000 audit: BPF prog-id=38 op=LOAD Jul 2 07:57:07.763000 audit: BPF prog-id=24 op=UNLOAD Jul 2 07:57:07.763000 audit: BPF prog-id=25 op=UNLOAD Jul 2 07:57:07.769627 systemd[1]: Finished ldconfig.service. Jul 2 07:57:07.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.778945 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 07:57:07.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.794890 systemd[1]: Starting audit-rules.service... Jul 2 07:57:07.803899 systemd[1]: Starting clean-ca-certificates.service... Jul 2 07:57:07.815422 systemd[1]: Starting oem-gce-enable-oslogin.service... Jul 2 07:57:07.826254 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 07:57:07.834000 audit: BPF prog-id=39 op=LOAD Jul 2 07:57:07.838188 systemd[1]: Starting systemd-resolved.service... 
Jul 2 07:57:07.838780 systemd-networkd[1024]: eth0: Gained IPv6LL Jul 2 07:57:07.844000 audit: BPF prog-id=40 op=LOAD Jul 2 07:57:07.848169 systemd[1]: Starting systemd-timesyncd.service... Jul 2 07:57:07.857306 systemd[1]: Starting systemd-update-utmp.service... Jul 2 07:57:07.867372 systemd[1]: Finished clean-ca-certificates.service. Jul 2 07:57:07.865000 audit[1163]: SYSTEM_BOOT pid=1163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.876988 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Jul 2 07:57:07.877245 systemd[1]: Finished oem-gce-enable-oslogin.service. Jul 2 07:57:07.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:57:07.897873 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:57:07.898434 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:57:07.901060 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:57:07.910283 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:57:07.920096 systemd[1]: Starting modprobe@loop.service... Jul 2 07:57:07.920000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 07:57:07.920000 audit[1169]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcd68a2a60 a2=420 a3=0 items=0 ppid=1139 pid=1169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:57:07.920000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 07:57:07.922335 augenrules[1169]: No rules Jul 2 07:57:07.929032 systemd[1]: Starting oem-gce-enable-oslogin.service... Jul 2 07:57:07.936830 enable-oslogin[1177]: /etc/pam.d/sshd already exists. Not enabling OS Login Jul 2 07:57:07.937885 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:57:07.938156 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:57:07.938365 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:57:07.938536 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:57:07.941067 systemd[1]: Finished audit-rules.service. 
Jul 2 07:57:07.948725 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 07:57:07.959605 systemd[1]: Finished systemd-update-utmp.service. Jul 2 07:57:07.968581 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:57:07.968818 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:57:07.977686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:57:07.977906 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:57:07.987585 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:57:07.987823 systemd[1]: Finished modprobe@loop.service. Jul 2 07:57:07.997371 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Jul 2 07:57:07.997614 systemd[1]: Finished oem-gce-enable-oslogin.service. Jul 2 07:57:08.010285 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:57:08.010784 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:57:08.014370 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:57:08.023987 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:57:08.025620 systemd-resolved[1155]: Positive Trust Anchors: Jul 2 07:57:08.026115 systemd-resolved[1155]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:57:08.026272 systemd-resolved[1155]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:57:08.469082 systemd-timesyncd[1158]: Contacted time server 169.254.169.254:123 (169.254.169.254). Jul 2 07:57:08.469694 systemd-timesyncd[1158]: Initial clock synchronization to Tue 2024-07-02 07:57:08.468963 UTC. Jul 2 07:57:08.474961 systemd[1]: Starting modprobe@loop.service... Jul 2 07:57:08.483950 systemd[1]: Starting oem-gce-enable-oslogin.service... Jul 2 07:57:08.489750 enable-oslogin[1182]: /etc/pam.d/sshd already exists. Not enabling OS Login Jul 2 07:57:08.492032 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:57:08.492372 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:57:08.494996 systemd[1]: Starting systemd-update-done.service... Jul 2 07:57:08.501970 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:57:08.502245 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:57:08.504529 systemd[1]: Started systemd-timesyncd.service. Jul 2 07:57:08.512735 systemd-resolved[1155]: Defaulting to hostname 'linux'. Jul 2 07:57:08.514667 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:57:08.514918 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:57:08.524515 systemd[1]: Started systemd-resolved.service. Jul 2 07:57:08.533586 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 2 07:57:08.533847 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:57:08.543609 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:57:08.543859 systemd[1]: Finished modprobe@loop.service. Jul 2 07:57:08.553666 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Jul 2 07:57:08.553934 systemd[1]: Finished oem-gce-enable-oslogin.service. Jul 2 07:57:08.563702 systemd[1]: Finished systemd-update-done.service. Jul 2 07:57:08.572939 systemd[1]: Reached target network.target. Jul 2 07:57:08.582206 systemd[1]: Reached target nss-lookup.target. Jul 2 07:57:08.591149 systemd[1]: Reached target time-set.target. Jul 2 07:57:08.600134 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:57:08.600443 systemd[1]: Reached target sysinit.target. Jul 2 07:57:08.609404 systemd[1]: Started motdgen.path. Jul 2 07:57:08.617257 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 07:57:08.628516 systemd[1]: Started logrotate.timer. Jul 2 07:57:08.636439 systemd[1]: Started mdadm.timer. Jul 2 07:57:08.644338 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 07:57:08.653189 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:57:08.653588 systemd[1]: Reached target paths.target. Jul 2 07:57:08.661186 systemd[1]: Reached target timers.target. Jul 2 07:57:08.669869 systemd[1]: Listening on dbus.socket. Jul 2 07:57:08.678982 systemd[1]: Starting docker.socket... Jul 2 07:57:08.691039 systemd[1]: Listening on sshd.socket. Jul 2 07:57:08.698267 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:57:08.698601 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:57:08.701583 systemd[1]: Listening on docker.socket. Jul 2 07:57:08.711823 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:57:08.712006 systemd[1]: Reached target sockets.target. Jul 2 07:57:08.721143 systemd[1]: Reached target basic.target. Jul 2 07:57:08.728156 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:57:08.728474 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:57:08.730516 systemd[1]: Starting containerd.service... Jul 2 07:57:08.741206 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 2 07:57:08.754660 systemd[1]: Starting dbus.service... Jul 2 07:57:08.763277 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 07:57:08.772941 systemd[1]: Starting extend-filesystems.service... Jul 2 07:57:08.777797 jq[1189]: false Jul 2 07:57:08.780929 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 07:57:08.783425 systemd[1]: Starting modprobe@drm.service... Jul 2 07:57:08.792969 systemd[1]: Starting motdgen.service... Jul 2 07:57:08.802781 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 07:57:08.812370 systemd[1]: Starting sshd-keygen.service... Jul 2 07:57:08.822305 systemd[1]: Starting systemd-networkd-wait-online.service... 
Jul 2 07:57:08.830939 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:57:08.831244 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jul 2 07:57:08.832160 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 07:57:08.833832 systemd[1]: Starting update-engine.service... Jul 2 07:57:08.843161 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 07:57:08.849330 jq[1206]: true Jul 2 07:57:08.857546 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 07:57:08.857853 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 07:57:08.860073 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:57:08.860298 systemd[1]: Finished modprobe@drm.service. Jul 2 07:57:08.869750 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 07:57:08.870024 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 07:57:08.873394 extend-filesystems[1190]: Found loop1 Jul 2 07:57:08.885800 extend-filesystems[1190]: Found sda Jul 2 07:57:08.885800 extend-filesystems[1190]: Found sda1 Jul 2 07:57:08.885800 extend-filesystems[1190]: Found sda2 Jul 2 07:57:08.885800 extend-filesystems[1190]: Found sda3 Jul 2 07:57:08.885800 extend-filesystems[1190]: Found usr Jul 2 07:57:08.885800 extend-filesystems[1190]: Found sda4 Jul 2 07:57:08.885800 extend-filesystems[1190]: Found sda6 Jul 2 07:57:08.885800 extend-filesystems[1190]: Found sda7 Jul 2 07:57:08.885800 extend-filesystems[1190]: Found sda9 Jul 2 07:57:08.885800 extend-filesystems[1190]: Checking size of /dev/sda9 Jul 2 07:57:08.879893 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 07:57:09.002874 update_engine[1204]: I0702 07:57:08.984385 1204 main.cc:92] Flatcar Update Engine starting Jul 2 07:57:09.005414 extend-filesystems[1190]: Resized partition /dev/sda9 Jul 2 07:57:08.896922 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 07:57:08.897187 systemd[1]: Finished motdgen.service. Jul 2 07:57:09.013273 jq[1215]: true Jul 2 07:57:08.915280 systemd[1]: Reached target network-online.target. Jul 2 07:57:08.930754 systemd[1]: Starting kubelet.service... Jul 2 07:57:08.946240 systemd[1]: Starting oem-gce.service... Jul 2 07:57:08.954344 systemd[1]: Starting systemd-logind.service... Jul 2 07:57:08.962050 systemd[1]: Finished ensure-sysext.service. 
Jul 2 07:57:09.015064 mkfs.ext4[1230]: mke2fs 1.46.5 (30-Dec-2021) Jul 2 07:57:09.015064 mkfs.ext4[1230]: Discarding device blocks: done Jul 2 07:57:09.015064 mkfs.ext4[1230]: Creating filesystem with 262144 4k blocks and 65536 inodes Jul 2 07:57:09.015064 mkfs.ext4[1230]: Filesystem UUID: d6bc64d9-280f-4f78-abd2-c5fb23edde51 Jul 2 07:57:09.015064 mkfs.ext4[1230]: Superblock backups stored on blocks: Jul 2 07:57:09.015064 mkfs.ext4[1230]: 32768, 98304, 163840, 229376 Jul 2 07:57:09.015064 mkfs.ext4[1230]: Allocating group tables: done Jul 2 07:57:09.015064 mkfs.ext4[1230]: Writing inode tables: done Jul 2 07:57:09.015064 mkfs.ext4[1230]: Creating journal (8192 blocks): done Jul 2 07:57:09.016292 mkfs.ext4[1230]: Writing superblocks and filesystem accounting information: done Jul 2 07:57:09.027597 dbus-daemon[1188]: [system] SELinux support is enabled Jul 2 07:57:09.028369 systemd[1]: Started dbus.service. Jul 2 07:57:09.034349 extend-filesystems[1232]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 07:57:09.037881 dbus-daemon[1188]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1024 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 07:57:09.040233 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 07:57:09.040281 systemd[1]: Reached target system-config.target. Jul 2 07:57:09.048436 dbus-daemon[1188]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 07:57:09.058795 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jul 2 07:57:09.061006 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 07:57:09.061056 systemd[1]: Reached target user-config.target. Jul 2 07:57:09.064444 update_engine[1204]: I0702 07:57:09.064285 1204 update_check_scheduler.cc:74] Next update check in 10m8s Jul 2 07:57:09.075833 systemd[1]: Started update-engine.service. Jul 2 07:57:09.087155 systemd[1]: Started locksmithd.service. Jul 2 07:57:09.098489 umount[1249]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Jul 2 07:57:09.099992 systemd[1]: Starting systemd-hostnamed.service... Jul 2 07:57:09.117791 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jul 2 07:57:09.131798 extend-filesystems[1232]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 2 07:57:09.131798 extend-filesystems[1232]: old_desc_blocks = 1, new_desc_blocks = 2 Jul 2 07:57:09.131798 extend-filesystems[1232]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jul 2 07:57:09.207835 kernel: loop2: detected capacity change from 0 to 2097152 Jul 2 07:57:09.207893 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 07:57:09.132936 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jul 2 07:57:09.208039 bash[1248]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:57:09.208182 env[1216]: time="2024-07-02T07:57:09.201347687Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 07:57:09.208499 extend-filesystems[1190]: Resized filesystem in /dev/sda9 Jul 2 07:57:09.133261 systemd[1]: Finished extend-filesystems.service. Jul 2 07:57:09.165636 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 07:57:09.236115 coreos-metadata[1187]: Jul 02 07:57:09.235 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jul 2 07:57:09.241918 coreos-metadata[1187]: Jul 02 07:57:09.241 INFO Fetch failed with 404: resource not found Jul 2 07:57:09.242093 coreos-metadata[1187]: Jul 02 07:57:09.241 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jul 2 07:57:09.243301 coreos-metadata[1187]: Jul 02 07:57:09.243 INFO Fetch successful Jul 2 07:57:09.243435 coreos-metadata[1187]: Jul 02 07:57:09.243 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jul 2 07:57:09.257539 coreos-metadata[1187]: Jul 02 07:57:09.257 INFO Fetch failed with 404: resource not found Jul 2 07:57:09.257798 coreos-metadata[1187]: Jul 02 07:57:09.257 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jul 2 07:57:09.259072 coreos-metadata[1187]: Jul 02 07:57:09.259 INFO Fetch failed with 404: resource not found Jul 2 07:57:09.259183 coreos-metadata[1187]: Jul 02 07:57:09.259 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jul 2 07:57:09.260711 coreos-metadata[1187]: Jul 02 07:57:09.260 INFO Fetch successful Jul 2 07:57:09.263294 unknown[1187]: wrote ssh authorized keys file for user: core Jul 2 07:57:09.317035 update-ssh-keys[1259]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:57:09.317809 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 2 07:57:09.341230 systemd-logind[1219]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 07:57:09.341272 systemd-logind[1219]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 2 07:57:09.341304 systemd-logind[1219]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 07:57:09.341563 systemd-logind[1219]: New seat seat0. Jul 2 07:57:09.347227 systemd[1]: Started systemd-logind.service. Jul 2 07:57:09.423944 env[1216]: time="2024-07-02T07:57:09.423834052Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 07:57:09.424114 env[1216]: time="2024-07-02T07:57:09.424079716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:57:09.431883 env[1216]: time="2024-07-02T07:57:09.431810596Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:57:09.431883 env[1216]: time="2024-07-02T07:57:09.431880938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:57:09.432302 env[1216]: time="2024-07-02T07:57:09.432258157Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:57:09.432302 env[1216]: time="2024-07-02T07:57:09.432301832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 07:57:09.432459 env[1216]: time="2024-07-02T07:57:09.432325337Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 07:57:09.432459 env[1216]: time="2024-07-02T07:57:09.432341596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 07:57:09.432556 env[1216]: time="2024-07-02T07:57:09.432504555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:57:09.432902 env[1216]: time="2024-07-02T07:57:09.432870408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:57:09.433180 env[1216]: time="2024-07-02T07:57:09.433144864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:57:09.433864 env[1216]: time="2024-07-02T07:57:09.433180347Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 07:57:09.433864 env[1216]: time="2024-07-02T07:57:09.433261766Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 07:57:09.433864 env[1216]: time="2024-07-02T07:57:09.433281759Z" level=info msg="metadata content store policy set" policy=shared Jul 2 07:57:09.437849 dbus-daemon[1188]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 07:57:09.438051 systemd[1]: Started systemd-hostnamed.service. Jul 2 07:57:09.440649 dbus-daemon[1188]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1251 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 07:57:09.451854 systemd[1]: Starting polkit.service... Jul 2 07:57:09.454198 env[1216]: time="2024-07-02T07:57:09.454086677Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 07:57:09.454370 env[1216]: time="2024-07-02T07:57:09.454222285Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 07:57:09.454370 env[1216]: time="2024-07-02T07:57:09.454244972Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 07:57:09.454370 env[1216]: time="2024-07-02T07:57:09.454321958Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 07:57:09.454516 env[1216]: time="2024-07-02T07:57:09.454346314Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 07:57:09.454516 env[1216]: time="2024-07-02T07:57:09.454464291Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jul 2 07:57:09.454516 env[1216]: time="2024-07-02T07:57:09.454490372Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 07:57:09.454660 env[1216]: time="2024-07-02T07:57:09.454514675Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 07:57:09.454660 env[1216]: time="2024-07-02T07:57:09.454562486Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 07:57:09.454660 env[1216]: time="2024-07-02T07:57:09.454587753Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 07:57:09.454660 env[1216]: time="2024-07-02T07:57:09.454628855Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 07:57:09.454660 env[1216]: time="2024-07-02T07:57:09.454652871Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 07:57:09.455096 env[1216]: time="2024-07-02T07:57:09.455063411Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 07:57:09.455302 env[1216]: time="2024-07-02T07:57:09.455255539Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 07:57:09.455958 env[1216]: time="2024-07-02T07:57:09.455920042Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 07:57:09.456061 env[1216]: time="2024-07-02T07:57:09.455995507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 07:57:09.456061 env[1216]: time="2024-07-02T07:57:09.456023402Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 07:57:09.456439 env[1216]: time="2024-07-02T07:57:09.456129278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 07:57:09.456540 env[1216]: time="2024-07-02T07:57:09.456485719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 07:57:09.456595 env[1216]: time="2024-07-02T07:57:09.456517663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 07:57:09.456595 env[1216]: time="2024-07-02T07:57:09.456559358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 07:57:09.456595 env[1216]: time="2024-07-02T07:57:09.456583723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 07:57:09.457342 env[1216]: time="2024-07-02T07:57:09.457302245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 07:57:09.458010 env[1216]: time="2024-07-02T07:57:09.457354201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 07:57:09.458102 env[1216]: time="2024-07-02T07:57:09.458018377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 07:57:09.458102 env[1216]: time="2024-07-02T07:57:09.458073669Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jul 2 07:57:09.458356 env[1216]: time="2024-07-02T07:57:09.458329145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 07:57:09.458431 env[1216]: time="2024-07-02T07:57:09.458363906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 07:57:09.458431 env[1216]: time="2024-07-02T07:57:09.458407875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 07:57:09.458528 env[1216]: time="2024-07-02T07:57:09.458429963Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 07:57:09.458528 env[1216]: time="2024-07-02T07:57:09.458486276Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 07:57:09.458528 env[1216]: time="2024-07-02T07:57:09.458509583Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 07:57:09.464101 env[1216]: time="2024-07-02T07:57:09.462268009Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 07:57:09.464101 env[1216]: time="2024-07-02T07:57:09.462390473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 07:57:09.464300 env[1216]: time="2024-07-02T07:57:09.462903476Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 07:57:09.464300 env[1216]: time="2024-07-02T07:57:09.463048019Z" level=info msg="Connect containerd service" Jul 2 07:57:09.464300 env[1216]: time="2024-07-02T07:57:09.463134388Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 07:57:09.471691 env[1216]: time="2024-07-02T07:57:09.466233609Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:57:09.471691 env[1216]: time="2024-07-02T07:57:09.466603790Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 07:57:09.471691 env[1216]: time="2024-07-02T07:57:09.466672812Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 07:57:09.471691 env[1216]: time="2024-07-02T07:57:09.466748872Z" level=info msg="containerd successfully booted in 0.272361s" Jul 2 07:57:09.466899 systemd[1]: Started containerd.service. Jul 2 07:57:09.475823 env[1216]: time="2024-07-02T07:57:09.475728910Z" level=info msg="Start subscribing containerd event" Jul 2 07:57:09.476606 env[1216]: time="2024-07-02T07:57:09.476570173Z" level=info msg="Start recovering state" Jul 2 07:57:09.476885 env[1216]: time="2024-07-02T07:57:09.476858906Z" level=info msg="Start event monitor" Jul 2 07:57:09.477028 env[1216]: time="2024-07-02T07:57:09.476998855Z" level=info msg="Start snapshots syncer" Jul 2 07:57:09.477134 env[1216]: time="2024-07-02T07:57:09.477111965Z" level=info msg="Start cni network conf syncer for default" Jul 2 07:57:09.477266 env[1216]: time="2024-07-02T07:57:09.477236944Z" level=info msg="Start streaming server" Jul 2 07:57:09.535296 polkitd[1263]: Started polkitd version 121 Jul 2 07:57:09.569343 polkitd[1263]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 07:57:09.569441 polkitd[1263]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 07:57:09.577485 polkitd[1263]: Finished loading, compiling and executing 2 rules Jul 2 07:57:09.579988 dbus-daemon[1188]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 07:57:09.580246 systemd[1]: Started polkit.service. Jul 2 07:57:09.580846 polkitd[1263]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 07:57:09.625888 systemd-hostnamed[1251]: Hostname set to (transient) Jul 2 07:57:09.629359 systemd-resolved[1155]: System hostname changed to 'ci-3510-3-5-781a6bd2055d33013279.c.flatcar-212911.internal'. Jul 2 07:57:11.120262 systemd[1]: Started kubelet.service. Jul 2 07:57:11.288092 sshd_keygen[1212]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 07:57:11.350679 systemd[1]: Finished sshd-keygen.service. Jul 2 07:57:11.360499 systemd[1]: Starting issuegen.service... Jul 2 07:57:11.380623 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 07:57:11.380919 systemd[1]: Finished issuegen.service. Jul 2 07:57:11.390690 systemd[1]: Starting systemd-user-sessions.service... Jul 2 07:57:11.405701 systemd[1]: Finished systemd-user-sessions.service. Jul 2 07:57:11.413232 locksmithd[1250]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 07:57:11.416623 systemd[1]: Started getty@tty1.service. Jul 2 07:57:11.427731 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 07:57:11.437430 systemd[1]: Reached target getty.target. 
Jul 2 07:57:12.283046 kubelet[1276]: E0702 07:57:12.282954 1276 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:57:12.285549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:57:12.285721 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:57:12.286161 systemd[1]: kubelet.service: Consumed 1.440s CPU time. Jul 2 07:57:14.367627 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Jul 2 07:57:16.646846 kernel: loop2: detected capacity change from 0 to 2097152 Jul 2 07:57:16.669918 systemd-nspawn[1299]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Jul 2 07:57:16.669918 systemd-nspawn[1299]: Press ^] three times within 1s to kill container. Jul 2 07:57:16.684820 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:57:16.767997 systemd[1]: Started oem-gce.service. Jul 2 07:57:16.768443 systemd[1]: Reached target multi-user.target. Jul 2 07:57:16.770727 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 07:57:16.781575 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 07:57:16.781862 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 07:57:16.785881 systemd[1]: Startup finished in 1.065s (kernel) + 7.518s (initrd) + 15.994s (userspace) = 24.579s. Jul 2 07:57:16.840255 systemd-nspawn[1299]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jul 2 07:57:16.840472 systemd-nspawn[1299]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jul 2 07:57:16.840472 systemd-nspawn[1299]: + /usr/bin/google_instance_setup Jul 2 07:57:17.325145 systemd[1]: Created slice system-sshd.slice. Jul 2 07:57:17.328371 systemd[1]: Started sshd@0-10.128.0.79:22-147.75.109.163:53704.service. Jul 2 07:57:17.635209 instance-setup[1305]: INFO Running google_set_multiqueue. Jul 2 07:57:17.653720 instance-setup[1305]: INFO Set channels for eth0 to 2. Jul 2 07:57:17.658856 instance-setup[1305]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jul 2 07:57:17.660686 instance-setup[1305]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jul 2 07:57:17.664258 instance-setup[1305]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jul 2 07:57:17.664987 instance-setup[1305]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jul 2 07:57:17.665174 instance-setup[1305]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jul 2 07:57:17.667319 instance-setup[1305]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jul 2 07:57:17.667939 instance-setup[1305]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Jul 2 07:57:17.669853 instance-setup[1305]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jul 2 07:57:17.672727 sshd[1307]: Accepted publickey for core from 147.75.109.163 port 53704 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:57:17.677267 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:57:17.687384 instance-setup[1305]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jul 2 07:57:17.687580 instance-setup[1305]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jul 2 07:57:17.698438 systemd[1]: Created slice user-500.slice. Jul 2 07:57:17.702898 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 07:57:17.717159 systemd-logind[1219]: New session 1 of user core. Jul 2 07:57:17.727685 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 07:57:17.731362 systemd[1]: Starting user@500.service... Jul 2 07:57:17.752122 (systemd)[1340]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:57:17.765977 systemd-nspawn[1299]: + /usr/bin/google_metadata_script_runner --script-type startup Jul 2 07:57:17.921820 systemd[1340]: Queued start job for default target default.target. Jul 2 07:57:17.923634 systemd[1340]: Reached target paths.target. Jul 2 07:57:17.923978 systemd[1340]: Reached target sockets.target. Jul 2 07:57:17.924168 systemd[1340]: Reached target timers.target. Jul 2 07:57:17.924406 systemd[1340]: Reached target basic.target. Jul 2 07:57:17.924667 systemd[1340]: Reached target default.target. Jul 2 07:57:17.924784 systemd[1]: Started user@500.service. Jul 2 07:57:17.925330 systemd[1340]: Startup finished in 161ms. Jul 2 07:57:17.926749 systemd[1]: Started session-1.scope. Jul 2 07:57:18.161491 systemd[1]: Started sshd@1-10.128.0.79:22-147.75.109.163:53716.service. Jul 2 07:57:18.225797 startup-script[1342]: INFO Starting startup scripts. Jul 2 07:57:18.241792 startup-script[1342]: INFO No startup scripts found in metadata. Jul 2 07:57:18.241970 startup-script[1342]: INFO Finished running startup scripts. Jul 2 07:57:18.283015 systemd-nspawn[1299]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jul 2 07:57:18.283015 systemd-nspawn[1299]: + daemon_pids=() Jul 2 07:57:18.283015 systemd-nspawn[1299]: + for d in accounts clock_skew network Jul 2 07:57:18.283479 systemd-nspawn[1299]: + daemon_pids+=($!) Jul 2 07:57:18.283479 systemd-nspawn[1299]: + for d in accounts clock_skew network Jul 2 07:57:18.283645 systemd-nspawn[1299]: + daemon_pids+=($!) Jul 2 07:57:18.283728 systemd-nspawn[1299]: + for d in accounts clock_skew network Jul 2 07:57:18.284020 systemd-nspawn[1299]: + daemon_pids+=($!) Jul 2 07:57:18.284146 systemd-nspawn[1299]: + NOTIFY_SOCKET=/run/systemd/notify Jul 2 07:57:18.284146 systemd-nspawn[1299]: + /usr/bin/systemd-notify --ready Jul 2 07:57:18.284517 systemd-nspawn[1299]: + /usr/bin/google_network_daemon Jul 2 07:57:18.284648 systemd-nspawn[1299]: + /usr/bin/google_clock_skew_daemon Jul 2 07:57:18.285355 systemd-nspawn[1299]: + /usr/bin/google_accounts_daemon Jul 2 07:57:18.364647 systemd-nspawn[1299]: + wait -n 36 37 38 Jul 2 07:57:18.511949 sshd[1352]: Accepted publickey for core from 147.75.109.163 port 53716 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:57:18.514072 sshd[1352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:57:18.524296 systemd[1]: Started session-2.scope. Jul 2 07:57:18.527134 systemd-logind[1219]: New session 2 of user core. 
Jul 2 07:57:18.733416 sshd[1352]: pam_unix(sshd:session): session closed for user core Jul 2 07:57:18.741718 systemd[1]: sshd@1-10.128.0.79:22-147.75.109.163:53716.service: Deactivated successfully. Jul 2 07:57:18.742983 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 07:57:18.745407 systemd-logind[1219]: Session 2 logged out. Waiting for processes to exit. Jul 2 07:57:18.746950 systemd-logind[1219]: Removed session 2. Jul 2 07:57:18.780917 systemd[1]: Started sshd@2-10.128.0.79:22-147.75.109.163:53730.service. Jul 2 07:57:19.047203 google-networking[1356]: INFO Starting Google Networking daemon. Jul 2 07:57:19.100824 sshd[1362]: Accepted publickey for core from 147.75.109.163 port 53730 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:57:19.101828 sshd[1362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:57:19.109974 systemd[1]: Started session-3.scope. Jul 2 07:57:19.110607 systemd-logind[1219]: New session 3 of user core. Jul 2 07:57:19.154442 google-clock-skew[1355]: INFO Starting Google Clock Skew daemon. Jul 2 07:57:19.169924 google-clock-skew[1355]: INFO Clock drift token has changed: 0. Jul 2 07:57:19.176476 systemd-nspawn[1299]: hwclock: Cannot access the Hardware Clock via any known method. Jul 2 07:57:19.177291 systemd-nspawn[1299]: hwclock: Use the --verbose option to see the details of our search for an access method. Jul 2 07:57:19.178327 google-clock-skew[1355]: WARNING Failed to sync system time with hardware clock. Jul 2 07:57:19.192312 groupadd[1372]: group added to /etc/group: name=google-sudoers, GID=1000 Jul 2 07:57:19.197694 groupadd[1372]: group added to /etc/gshadow: name=google-sudoers Jul 2 07:57:19.202753 groupadd[1372]: new group: name=google-sudoers, GID=1000 Jul 2 07:57:19.218846 google-accounts[1354]: INFO Starting Google Accounts daemon. Jul 2 07:57:19.247694 google-accounts[1354]: WARNING OS Login not installed. Jul 2 07:57:19.249237 google-accounts[1354]: INFO Creating a new user account for 0. Jul 2 07:57:19.256476 systemd-nspawn[1299]: useradd: invalid user name '0': use --badname to ignore Jul 2 07:57:19.257323 google-accounts[1354]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jul 2 07:57:19.308072 sshd[1362]: pam_unix(sshd:session): session closed for user core Jul 2 07:57:19.312886 systemd[1]: sshd@2-10.128.0.79:22-147.75.109.163:53730.service: Deactivated successfully. Jul 2 07:57:19.313962 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 07:57:19.314833 systemd-logind[1219]: Session 3 logged out. Waiting for processes to exit. Jul 2 07:57:19.316268 systemd-logind[1219]: Removed session 3. Jul 2 07:57:19.354455 systemd[1]: Started sshd@3-10.128.0.79:22-147.75.109.163:53742.service. Jul 2 07:57:19.641481 sshd[1386]: Accepted publickey for core from 147.75.109.163 port 53742 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:57:19.643093 sshd[1386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:57:19.650266 systemd[1]: Started session-4.scope. Jul 2 07:57:19.650944 systemd-logind[1219]: New session 4 of user core. Jul 2 07:57:19.855078 sshd[1386]: pam_unix(sshd:session): session closed for user core Jul 2 07:57:19.859335 systemd[1]: sshd@3-10.128.0.79:22-147.75.109.163:53742.service: Deactivated successfully. Jul 2 07:57:19.860434 systemd[1]: session-4.scope: Deactivated successfully. 
Jul 2 07:57:19.861523 systemd-logind[1219]: Session 4 logged out. Waiting for processes to exit. Jul 2 07:57:19.862847 systemd-logind[1219]: Removed session 4. Jul 2 07:57:19.900800 systemd[1]: Started sshd@4-10.128.0.79:22-147.75.109.163:53748.service. Jul 2 07:57:20.185600 sshd[1392]: Accepted publickey for core from 147.75.109.163 port 53748 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:57:20.187309 sshd[1392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:57:20.194714 systemd[1]: Started session-5.scope. Jul 2 07:57:20.195467 systemd-logind[1219]: New session 5 of user core. Jul 2 07:57:20.385625 sudo[1395]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:57:20.386095 sudo[1395]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:57:20.407742 systemd[1]: Starting coreos-metadata.service... Jul 2 07:57:20.461475 coreos-metadata[1399]: Jul 02 07:57:20.461 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jul 2 07:57:20.463824 coreos-metadata[1399]: Jul 02 07:57:20.463 INFO Fetch successful Jul 2 07:57:20.463824 coreos-metadata[1399]: Jul 02 07:57:20.463 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jul 2 07:57:20.464789 coreos-metadata[1399]: Jul 02 07:57:20.464 INFO Fetch successful Jul 2 07:57:20.464927 coreos-metadata[1399]: Jul 02 07:57:20.464 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jul 2 07:57:20.465707 coreos-metadata[1399]: Jul 02 07:57:20.465 INFO Fetch successful Jul 2 07:57:20.465707 coreos-metadata[1399]: Jul 02 07:57:20.465 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jul 2 07:57:20.467100 coreos-metadata[1399]: Jul 02 07:57:20.467 INFO Fetch successful Jul 2 07:57:20.478708 systemd[1]: Finished coreos-metadata.service. Jul 2 07:57:21.600384 systemd[1]: Stopped kubelet.service. Jul 2 07:57:21.601173 systemd[1]: kubelet.service: Consumed 1.440s CPU time. Jul 2 07:57:21.604879 systemd[1]: Starting kubelet.service... Jul 2 07:57:21.638961 systemd[1]: Reloading. Jul 2 07:57:21.800249 /usr/lib/systemd/system-generators/torcx-generator[1457]: time="2024-07-02T07:57:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:57:21.800300 /usr/lib/systemd/system-generators/torcx-generator[1457]: time="2024-07-02T07:57:21Z" level=info msg="torcx already run" Jul 2 07:57:21.933887 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:57:21.933918 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:57:21.961936 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:57:22.116114 systemd[1]: Started kubelet.service. Jul 2 07:57:22.127405 systemd[1]: Stopping kubelet.service... Jul 2 07:57:22.128407 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 2 07:57:22.128678 systemd[1]: Stopped kubelet.service. Jul 2 07:57:22.131397 systemd[1]: Starting kubelet.service... Jul 2 07:57:22.334999 systemd[1]: Started kubelet.service. Jul 2 07:57:22.406844 kubelet[1513]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:57:22.406844 kubelet[1513]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:57:22.406844 kubelet[1513]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:57:22.409222 kubelet[1513]: I0702 07:57:22.409131 1513 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:57:23.259974 kubelet[1513]: I0702 07:57:23.259917 1513 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 07:57:23.259974 kubelet[1513]: I0702 07:57:23.259956 1513 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:57:23.260314 kubelet[1513]: I0702 07:57:23.260278 1513 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 07:57:23.289541 kubelet[1513]: I0702 07:57:23.289475 1513 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:57:23.307881 kubelet[1513]: I0702 07:57:23.307832 1513 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 07:57:23.308261 kubelet[1513]: I0702 07:57:23.308200 1513 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:57:23.308506 kubelet[1513]: I0702 07:57:23.308249 1513 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.128.0.79","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:57:23.308691 kubelet[1513]: I0702 07:57:23.308519 1513 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:57:23.308691 kubelet[1513]: I0702 07:57:23.308539 1513 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:57:23.310339 kubelet[1513]: I0702 07:57:23.310292 1513 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:57:23.312129 kubelet[1513]: I0702 07:57:23.312106 1513 kubelet.go:400] "Attempting to sync node with API server" Jul 2 07:57:23.312129 kubelet[1513]: I0702 07:57:23.312132 1513 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:57:23.312296 kubelet[1513]: I0702 07:57:23.312170 1513 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:57:23.312296 kubelet[1513]: I0702 07:57:23.312192 1513 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:57:23.312806 kubelet[1513]: E0702 07:57:23.312772 1513 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:23.312976 kubelet[1513]: E0702 07:57:23.312939 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:23.319127 kubelet[1513]: I0702 07:57:23.319094 1513 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:57:23.321972 kubelet[1513]: I0702 07:57:23.321930 1513 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:57:23.322115 kubelet[1513]: W0702 07:57:23.322013 1513 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 07:57:23.323136 kubelet[1513]: I0702 07:57:23.322743 1513 server.go:1264] "Started kubelet" Jul 2 07:57:23.341680 kubelet[1513]: I0702 07:57:23.341596 1513 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:57:23.343141 kubelet[1513]: I0702 07:57:23.343060 1513 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 07:57:23.343573 kubelet[1513]: I0702 07:57:23.343528 1513 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:57:23.351103 kubelet[1513]: I0702 07:57:23.351032 1513 server.go:455] "Adding debug handlers to kubelet server" Jul 2 07:57:23.364126 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 07:57:23.364632 kubelet[1513]: I0702 07:57:23.364606 1513 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:57:23.368288 kubelet[1513]: I0702 07:57:23.368248 1513 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:57:23.369162 kubelet[1513]: I0702 07:57:23.369137 1513 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 07:57:23.369423 kubelet[1513]: I0702 07:57:23.369407 1513 reconciler.go:26] "Reconciler: start to sync state" Jul 2 07:57:23.369872 kubelet[1513]: I0702 07:57:23.369780 1513 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:57:23.369989 kubelet[1513]: I0702 07:57:23.369917 1513 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 07:57:23.370134 kubelet[1513]: E0702 07:57:23.369301 1513 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:57:23.374712 kubelet[1513]: E0702 07:57:23.374673 1513 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.79\" not found" node="10.128.0.79" Jul 2 07:57:23.378543 kubelet[1513]: I0702 07:57:23.378515 1513 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:57:23.407052 kubelet[1513]: I0702 07:57:23.407003 1513 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:57:23.407052 kubelet[1513]: I0702 07:57:23.407043 1513 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:57:23.407527 kubelet[1513]: I0702 07:57:23.407070 1513 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:57:23.412864 kubelet[1513]: I0702 07:57:23.412823 1513 policy_none.go:49] "None policy: Start" Jul 2 07:57:23.414546 kubelet[1513]: I0702 07:57:23.414515 1513 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:57:23.414680 kubelet[1513]: I0702 07:57:23.414556 1513 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:57:23.425234 systemd[1]: Created slice kubepods.slice. Jul 2 07:57:23.437708 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 07:57:23.444439 systemd[1]: Created slice kubepods-besteffort.slice. 
Jul 2 07:57:23.452259 kubelet[1513]: I0702 07:57:23.452220 1513 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:57:23.453411 kubelet[1513]: I0702 07:57:23.453355 1513 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 07:57:23.453782 kubelet[1513]: I0702 07:57:23.453733 1513 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:57:23.458418 kubelet[1513]: E0702 07:57:23.458379 1513 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.79\" not found" Jul 2 07:57:23.470087 kubelet[1513]: I0702 07:57:23.470057 1513 kubelet_node_status.go:73] "Attempting to register node" node="10.128.0.79" Jul 2 07:57:23.476913 kubelet[1513]: I0702 07:57:23.476781 1513 kubelet_node_status.go:76] "Successfully registered node" node="10.128.0.79" Jul 2 07:57:23.489981 kubelet[1513]: I0702 07:57:23.489942 1513 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 2 07:57:23.490544 env[1216]: time="2024-07-02T07:57:23.490473833Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 07:57:23.491102 kubelet[1513]: I0702 07:57:23.490870 1513 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 2 07:57:23.554049 kubelet[1513]: I0702 07:57:23.553897 1513 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:57:23.558582 kubelet[1513]: I0702 07:57:23.558539 1513 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 07:57:23.558815 kubelet[1513]: I0702 07:57:23.558795 1513 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:57:23.559228 kubelet[1513]: I0702 07:57:23.559208 1513 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 07:57:23.559464 kubelet[1513]: E0702 07:57:23.559442 1513 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 2 07:57:23.856091 sudo[1395]: pam_unix(sudo:session): session closed for user root Jul 2 07:57:23.901105 sshd[1392]: pam_unix(sshd:session): session closed for user core Jul 2 07:57:23.905212 systemd[1]: sshd@4-10.128.0.79:22-147.75.109.163:53748.service: Deactivated successfully. Jul 2 07:57:23.906340 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:57:23.907304 systemd-logind[1219]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:57:23.908646 systemd-logind[1219]: Removed session 5. 
Jul 2 07:57:24.262373 kubelet[1513]: I0702 07:57:24.262189 1513 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 2 07:57:24.262998 kubelet[1513]: W0702 07:57:24.262933 1513 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 2 07:57:24.263120 kubelet[1513]: W0702 07:57:24.263015 1513 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 2 07:57:24.263120 kubelet[1513]: W0702 07:57:24.263050 1513 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 2 07:57:24.313904 kubelet[1513]: I0702 07:57:24.313855 1513 apiserver.go:52] "Watching apiserver" Jul 2 07:57:24.314185 kubelet[1513]: E0702 07:57:24.313871 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:24.321076 kubelet[1513]: I0702 07:57:24.321014 1513 topology_manager.go:215] "Topology Admit Handler" podUID="5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" podNamespace="kube-system" podName="cilium-h7fgk" Jul 2 07:57:24.321289 kubelet[1513]: I0702 07:57:24.321220 1513 topology_manager.go:215] "Topology Admit Handler" podUID="5bb55093-fee8-4062-92d3-25601a49899e" podNamespace="kube-system" podName="kube-proxy-d25mt" Jul 2 07:57:24.328930 systemd[1]: Created slice kubepods-besteffort-pod5bb55093_fee8_4062_92d3_25601a49899e.slice. Jul 2 07:57:24.340746 systemd[1]: Created slice kubepods-burstable-pod5c02665f_ceb0_4d9e_bef2_d37a9af1d7fc.slice. 
Jul 2 07:57:24.370491 kubelet[1513]: I0702 07:57:24.370438 1513 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 07:57:24.375425 kubelet[1513]: I0702 07:57:24.375371 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cilium-run\") pod \"cilium-h7fgk\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " pod="kube-system/cilium-h7fgk" Jul 2 07:57:24.375678 kubelet[1513]: I0702 07:57:24.375436 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cilium-cgroup\") pod \"cilium-h7fgk\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " pod="kube-system/cilium-h7fgk" Jul 2 07:57:24.375678 kubelet[1513]: I0702 07:57:24.375467 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-host-proc-sys-net\") pod \"cilium-h7fgk\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " pod="kube-system/cilium-h7fgk" Jul 2 07:57:24.375678 kubelet[1513]: I0702 07:57:24.375493 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-host-proc-sys-kernel\") pod \"cilium-h7fgk\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " pod="kube-system/cilium-h7fgk" Jul 2 07:57:24.375678 kubelet[1513]: I0702 07:57:24.375557 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-hubble-tls\") pod \"cilium-h7fgk\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " pod="kube-system/cilium-h7fgk" Jul 2 07:57:24.375678 kubelet[1513]: I0702 07:57:24.375604 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-bpf-maps\") pod \"cilium-h7fgk\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " pod="kube-system/cilium-h7fgk" Jul 2 07:57:24.375678 kubelet[1513]: I0702 07:57:24.375630 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cni-path\") pod \"cilium-h7fgk\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " pod="kube-system/cilium-h7fgk" Jul 2 07:57:24.376191 kubelet[1513]: I0702 07:57:24.375662 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-lib-modules\") pod \"cilium-h7fgk\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " pod="kube-system/cilium-h7fgk" Jul 2 07:57:24.376191 kubelet[1513]: I0702 07:57:24.375694 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-hostproc\") pod \"cilium-h7fgk\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " pod="kube-system/cilium-h7fgk" Jul 2 07:57:24.376191 kubelet[1513]: I0702 07:57:24.375720 1513 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-xtables-lock\") pod \"cilium-h7fgk\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " pod="kube-system/cilium-h7fgk" Jul 2 07:57:24.376191 kubelet[1513]: I0702 07:57:24.375746 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cilium-config-path\") pod \"cilium-h7fgk\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " pod="kube-system/cilium-h7fgk" Jul 2 07:57:24.376191 kubelet[1513]: I0702 07:57:24.375795 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd2k9\" (UniqueName: \"kubernetes.io/projected/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-kube-api-access-kd2k9\") pod \"cilium-h7fgk\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " pod="kube-system/cilium-h7fgk" Jul 2 07:57:24.376191 kubelet[1513]: I0702 07:57:24.375821 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5bb55093-fee8-4062-92d3-25601a49899e-kube-proxy\") pod \"kube-proxy-d25mt\" (UID: \"5bb55093-fee8-4062-92d3-25601a49899e\") " pod="kube-system/kube-proxy-d25mt" Jul 2 07:57:24.376405 kubelet[1513]: I0702 07:57:24.375865 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-etc-cni-netd\") pod \"cilium-h7fgk\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " pod="kube-system/cilium-h7fgk" Jul 2 07:57:24.376405 kubelet[1513]: I0702 07:57:24.375890 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-clustermesh-secrets\") pod \"cilium-h7fgk\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " pod="kube-system/cilium-h7fgk" Jul 2 07:57:24.376405 kubelet[1513]: I0702 07:57:24.375916 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bb55093-fee8-4062-92d3-25601a49899e-xtables-lock\") pod \"kube-proxy-d25mt\" (UID: \"5bb55093-fee8-4062-92d3-25601a49899e\") " pod="kube-system/kube-proxy-d25mt" Jul 2 07:57:24.376405 kubelet[1513]: I0702 07:57:24.375943 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bb55093-fee8-4062-92d3-25601a49899e-lib-modules\") pod \"kube-proxy-d25mt\" (UID: \"5bb55093-fee8-4062-92d3-25601a49899e\") " pod="kube-system/kube-proxy-d25mt" Jul 2 07:57:24.376405 kubelet[1513]: I0702 07:57:24.375999 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7rf6\" (UniqueName: \"kubernetes.io/projected/5bb55093-fee8-4062-92d3-25601a49899e-kube-api-access-r7rf6\") pod \"kube-proxy-d25mt\" (UID: \"5bb55093-fee8-4062-92d3-25601a49899e\") " pod="kube-system/kube-proxy-d25mt" Jul 2 07:57:24.639856 env[1216]: time="2024-07-02T07:57:24.638253689Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-d25mt,Uid:5bb55093-fee8-4062-92d3-25601a49899e,Namespace:kube-system,Attempt:0,}" Jul 2 07:57:24.651345 env[1216]: time="2024-07-02T07:57:24.651278501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h7fgk,Uid:5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc,Namespace:kube-system,Attempt:0,}" Jul 2 07:57:25.252198 env[1216]: time="2024-07-02T07:57:25.252120498Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:25.255252 env[1216]: time="2024-07-02T07:57:25.255161468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:25.257326 env[1216]: time="2024-07-02T07:57:25.257263921Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:25.260536 env[1216]: time="2024-07-02T07:57:25.260467624Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:25.262436 env[1216]: time="2024-07-02T07:57:25.262389171Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:25.264213 env[1216]: time="2024-07-02T07:57:25.263449772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:25.266559 env[1216]: time="2024-07-02T07:57:25.266494652Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:25.267717 env[1216]: time="2024-07-02T07:57:25.267671001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:25.308402 env[1216]: time="2024-07-02T07:57:25.300895170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:57:25.308402 env[1216]: time="2024-07-02T07:57:25.300949739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:57:25.308402 env[1216]: time="2024-07-02T07:57:25.300982619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:57:25.308402 env[1216]: time="2024-07-02T07:57:25.301543313Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4 pid=1572 runtime=io.containerd.runc.v2 Jul 2 07:57:25.309139 env[1216]: time="2024-07-02T07:57:25.304713871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:57:25.309139 env[1216]: time="2024-07-02T07:57:25.304812894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:57:25.309139 env[1216]: time="2024-07-02T07:57:25.304836905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:57:25.309139 env[1216]: time="2024-07-02T07:57:25.305038496Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea01b77dff1d6139210554fcfe703abb3ef85b0534c7555e6cef7a50147a89fe pid=1573 runtime=io.containerd.runc.v2 Jul 2 07:57:25.315240 kubelet[1513]: E0702 07:57:25.315191 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:25.337929 systemd[1]: Started cri-containerd-cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4.scope. Jul 2 07:57:25.367559 systemd[1]: Started cri-containerd-ea01b77dff1d6139210554fcfe703abb3ef85b0534c7555e6cef7a50147a89fe.scope. Jul 2 07:57:25.408083 env[1216]: time="2024-07-02T07:57:25.408018923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h7fgk,Uid:5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\"" Jul 2 07:57:25.412047 env[1216]: time="2024-07-02T07:57:25.411994121Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 07:57:25.427673 env[1216]: time="2024-07-02T07:57:25.427483200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d25mt,Uid:5bb55093-fee8-4062-92d3-25601a49899e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea01b77dff1d6139210554fcfe703abb3ef85b0534c7555e6cef7a50147a89fe\"" Jul 2 07:57:25.495367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1468248879.mount: Deactivated successfully. Jul 2 07:57:26.316412 kubelet[1513]: E0702 07:57:26.316267 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:27.317018 kubelet[1513]: E0702 07:57:27.316955 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:28.317805 kubelet[1513]: E0702 07:57:28.317737 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:29.318369 kubelet[1513]: E0702 07:57:29.318316 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:30.318705 kubelet[1513]: E0702 07:57:30.318600 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:30.916215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866142040.mount: Deactivated successfully. 
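The reconciler_common.go entries above enumerate every volume the kubelet must have attached and mounted before it can start cilium-h7fgk and kube-proxy-d25mt. Each UniqueName encodes the volume plugin (kubernetes.io/host-path, projected, configmap, secret) followed by the pod UID and the volume name. The sketch below, standard library only, splits a few of the UniqueName strings quoted above into those three parts; it is a reading aid for these log lines, not kubelet code.

```go
package main

import (
	"fmt"
	"strings"
)

// splitUniqueName breaks a kubelet volume UniqueName such as
// "kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cilium-run"
// into plugin, pod UID (36-character UUID) and volume name.
func splitUniqueName(u string) (plugin, uid, name string, ok bool) {
	parts := strings.SplitN(u, "/", 3)
	if len(parts) != 3 || len(parts[2]) < 37 {
		return "", "", "", false
	}
	return parts[0] + "/" + parts[1], parts[2][:36], parts[2][37:], true
}

func main() {
	// UniqueName values copied from the reconciler_common.go entries above.
	examples := []string{
		"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cilium-run",
		"kubernetes.io/projected/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-hubble-tls",
		"kubernetes.io/configmap/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cilium-config-path",
		"kubernetes.io/secret/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-clustermesh-secrets",
		"kubernetes.io/configmap/5bb55093-fee8-4062-92d3-25601a49899e-kube-proxy",
	}
	for _, u := range examples {
		if plugin, uid, name, ok := splitUniqueName(u); ok {
			fmt.Printf("%-25s pod=%s volume=%s\n", plugin, uid, name)
		}
	}
}
```

The host-path volumes (cilium-run, bpf-maps, lib-modules, and so on) point at directories already present on the node, while the projected, configmap and secret volumes are materialized by the kubelet under the pod's directory before the containers start.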
Jul 2 07:57:31.319421 kubelet[1513]: E0702 07:57:31.318961 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:32.319439 kubelet[1513]: E0702 07:57:32.319355 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:33.319645 kubelet[1513]: E0702 07:57:33.319561 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:34.304470 env[1216]: time="2024-07-02T07:57:34.304391507Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:34.312328 env[1216]: time="2024-07-02T07:57:34.312267598Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:34.314800 env[1216]: time="2024-07-02T07:57:34.314690329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:34.315647 env[1216]: time="2024-07-02T07:57:34.315599889Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 07:57:34.318491 env[1216]: time="2024-07-02T07:57:34.318408334Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 07:57:34.319875 kubelet[1513]: E0702 07:57:34.319831 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:34.321069 env[1216]: time="2024-07-02T07:57:34.321012208Z" level=info msg="CreateContainer within sandbox \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:57:34.340431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount850255767.mount: Deactivated successfully. Jul 2 07:57:34.351895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2869872903.mount: Deactivated successfully. Jul 2 07:57:34.365098 env[1216]: time="2024-07-02T07:57:34.364952306Z" level=info msg="CreateContainer within sandbox \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"53f7977887e983703f6eb788a2fc685a18527cac055121c399a79930658fceed\"" Jul 2 07:57:34.366534 env[1216]: time="2024-07-02T07:57:34.366497886Z" level=info msg="StartContainer for \"53f7977887e983703f6eb788a2fc685a18527cac055121c399a79930658fceed\"" Jul 2 07:57:34.398447 systemd[1]: Started cri-containerd-53f7977887e983703f6eb788a2fc685a18527cac055121c399a79930658fceed.scope. Jul 2 07:57:34.451492 env[1216]: time="2024-07-02T07:57:34.451380341Z" level=info msg="StartContainer for \"53f7977887e983703f6eb788a2fc685a18527cac055121c399a79930658fceed\" returns successfully" Jul 2 07:57:34.464356 systemd[1]: cri-containerd-53f7977887e983703f6eb788a2fc685a18527cac055121c399a79930658fceed.scope: Deactivated successfully. 
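The two containerd entries bracketing the Cilium image pull (PullImage at 2024-07-02T07:57:25.411994121Z, "returns image reference" at 2024-07-02T07:57:34.315599889Z) resolve the digest-pinned tag quay.io/cilium/cilium:v1.12.5@sha256:06ce2b… to the image ID sha256:3e35b3e9…. A small sketch computing the elapsed pull time from those two timestamps, copied verbatim from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the containerd PullImage entries above.
	start, _ := time.Parse(time.RFC3339Nano, "2024-07-02T07:57:25.411994121Z")
	done, _ := time.Parse(time.RFC3339Nano, "2024-07-02T07:57:34.315599889Z")
	fmt.Println("cilium image pull took", done.Sub(start)) // 8.903605768s
}
```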
Jul 2 07:57:35.320751 kubelet[1513]: E0702 07:57:35.320694 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:35.335364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53f7977887e983703f6eb788a2fc685a18527cac055121c399a79930658fceed-rootfs.mount: Deactivated successfully. Jul 2 07:57:36.283491 env[1216]: time="2024-07-02T07:57:36.283422796Z" level=info msg="shim disconnected" id=53f7977887e983703f6eb788a2fc685a18527cac055121c399a79930658fceed Jul 2 07:57:36.284156 env[1216]: time="2024-07-02T07:57:36.283502077Z" level=warning msg="cleaning up after shim disconnected" id=53f7977887e983703f6eb788a2fc685a18527cac055121c399a79930658fceed namespace=k8s.io Jul 2 07:57:36.284156 env[1216]: time="2024-07-02T07:57:36.283518061Z" level=info msg="cleaning up dead shim" Jul 2 07:57:36.297070 env[1216]: time="2024-07-02T07:57:36.297014739Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:57:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1695 runtime=io.containerd.runc.v2\n" Jul 2 07:57:36.321468 kubelet[1513]: E0702 07:57:36.321413 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:36.612108 env[1216]: time="2024-07-02T07:57:36.612036957Z" level=info msg="CreateContainer within sandbox \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:57:36.643146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount452891725.mount: Deactivated successfully. Jul 2 07:57:36.660066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3967757893.mount: Deactivated successfully. Jul 2 07:57:36.670133 env[1216]: time="2024-07-02T07:57:36.670059477Z" level=info msg="CreateContainer within sandbox \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e652525ccb3e37adfcedb7699d2778461d2450d7119cb670264226c8eba7d7e0\"" Jul 2 07:57:36.671432 env[1216]: time="2024-07-02T07:57:36.671378960Z" level=info msg="StartContainer for \"e652525ccb3e37adfcedb7699d2778461d2450d7119cb670264226c8eba7d7e0\"" Jul 2 07:57:36.712540 systemd[1]: Started cri-containerd-e652525ccb3e37adfcedb7699d2778461d2450d7119cb670264226c8eba7d7e0.scope. Jul 2 07:57:36.772013 env[1216]: time="2024-07-02T07:57:36.771948397Z" level=info msg="StartContainer for \"e652525ccb3e37adfcedb7699d2778461d2450d7119cb670264226c8eba7d7e0\" returns successfully" Jul 2 07:57:36.793384 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:57:36.795113 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:57:36.795349 systemd[1]: Stopping systemd-sysctl.service... Jul 2 07:57:36.806941 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:57:36.807541 systemd[1]: cri-containerd-e652525ccb3e37adfcedb7699d2778461d2450d7119cb670264226c8eba7d7e0.scope: Deactivated successfully. Jul 2 07:57:36.819611 systemd[1]: Finished systemd-sysctl.service. 
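The kubelet error that recurs roughly once a second throughout this log ("Unable to read config path … /etc/kubernetes/manifests") comes from its file-based static-pod source: the configured manifest path does not exist on this node, so that source is skipped. It is benign when no static pods are expected. If the noise is unwanted, one option — an assumption about the desired remediation, not something the log itself prescribes, and on Flatcar this path may be owned by the provisioning config — is simply to create the empty directory so the kubelet watches it instead of logging the error. A minimal sketch:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	const manifestDir = "/etc/kubernetes/manifests" // path from the kubelet entries above

	if _, err := os.Stat(manifestDir); os.IsNotExist(err) {
		// An empty directory is enough for the kubelet's file source to start
		// watching it rather than logging "path does not exist, ignoring".
		if err := os.MkdirAll(manifestDir, 0o755); err != nil {
			fmt.Fprintln(os.Stderr, "create:", err)
			os.Exit(1)
		}
		fmt.Println("created", manifestDir)
		return
	}
	fmt.Println(manifestDir, "already exists")
}
```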
Jul 2 07:57:36.876692 env[1216]: time="2024-07-02T07:57:36.875677849Z" level=info msg="shim disconnected" id=e652525ccb3e37adfcedb7699d2778461d2450d7119cb670264226c8eba7d7e0 Jul 2 07:57:36.877169 env[1216]: time="2024-07-02T07:57:36.877121134Z" level=warning msg="cleaning up after shim disconnected" id=e652525ccb3e37adfcedb7699d2778461d2450d7119cb670264226c8eba7d7e0 namespace=k8s.io Jul 2 07:57:36.877326 env[1216]: time="2024-07-02T07:57:36.877298616Z" level=info msg="cleaning up dead shim" Jul 2 07:57:36.894063 env[1216]: time="2024-07-02T07:57:36.894006308Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:57:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1759 runtime=io.containerd.runc.v2\n" Jul 2 07:57:37.322396 kubelet[1513]: E0702 07:57:37.322311 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:37.513673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3561179574.mount: Deactivated successfully. Jul 2 07:57:37.621442 env[1216]: time="2024-07-02T07:57:37.620958933Z" level=info msg="CreateContainer within sandbox \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:57:37.658510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4126998849.mount: Deactivated successfully. Jul 2 07:57:37.670735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1064917842.mount: Deactivated successfully. Jul 2 07:57:37.677212 env[1216]: time="2024-07-02T07:57:37.677143206Z" level=info msg="CreateContainer within sandbox \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ded82c49694e96246f559322a5e021aeec378865c7a742fc8373fc345311666d\"" Jul 2 07:57:37.677988 env[1216]: time="2024-07-02T07:57:37.677864597Z" level=info msg="StartContainer for \"ded82c49694e96246f559322a5e021aeec378865c7a742fc8373fc345311666d\"" Jul 2 07:57:37.715584 systemd[1]: Started cri-containerd-ded82c49694e96246f559322a5e021aeec378865c7a742fc8373fc345311666d.scope. Jul 2 07:57:37.789855 systemd[1]: cri-containerd-ded82c49694e96246f559322a5e021aeec378865c7a742fc8373fc345311666d.scope: Deactivated successfully. 
Jul 2 07:57:37.791532 env[1216]: time="2024-07-02T07:57:37.791463593Z" level=info msg="StartContainer for \"ded82c49694e96246f559322a5e021aeec378865c7a742fc8373fc345311666d\" returns successfully" Jul 2 07:57:38.011438 env[1216]: time="2024-07-02T07:57:38.010698884Z" level=info msg="shim disconnected" id=ded82c49694e96246f559322a5e021aeec378865c7a742fc8373fc345311666d Jul 2 07:57:38.011438 env[1216]: time="2024-07-02T07:57:38.010880288Z" level=warning msg="cleaning up after shim disconnected" id=ded82c49694e96246f559322a5e021aeec378865c7a742fc8373fc345311666d namespace=k8s.io Jul 2 07:57:38.011438 env[1216]: time="2024-07-02T07:57:38.010911007Z" level=info msg="cleaning up dead shim" Jul 2 07:57:38.029830 env[1216]: time="2024-07-02T07:57:38.029742126Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:57:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1820 runtime=io.containerd.runc.v2\n" Jul 2 07:57:38.164039 env[1216]: time="2024-07-02T07:57:38.163961317Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:38.166742 env[1216]: time="2024-07-02T07:57:38.166685865Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:38.169487 env[1216]: time="2024-07-02T07:57:38.169432647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:38.171990 env[1216]: time="2024-07-02T07:57:38.171935400Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:38.172822 env[1216]: time="2024-07-02T07:57:38.172751729Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jul 2 07:57:38.176948 env[1216]: time="2024-07-02T07:57:38.176898534Z" level=info msg="CreateContainer within sandbox \"ea01b77dff1d6139210554fcfe703abb3ef85b0534c7555e6cef7a50147a89fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:57:38.197820 env[1216]: time="2024-07-02T07:57:38.197719904Z" level=info msg="CreateContainer within sandbox \"ea01b77dff1d6139210554fcfe703abb3ef85b0534c7555e6cef7a50147a89fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7dfdfa887b48a794d50fd5c4efd645313b1b7d56750bf561cc3e1ff7556d921a\"" Jul 2 07:57:38.198866 env[1216]: time="2024-07-02T07:57:38.198809513Z" level=info msg="StartContainer for \"7dfdfa887b48a794d50fd5c4efd645313b1b7d56750bf561cc3e1ff7556d921a\"" Jul 2 07:57:38.225209 systemd[1]: Started cri-containerd-7dfdfa887b48a794d50fd5c4efd645313b1b7d56750bf561cc3e1ff7556d921a.scope. 
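A single pull produces a burst of containerd image events, as the registry.k8s.io/kube-proxy:v1.30.2 sequence above shows: an ImageCreate for the tag, one for the image ID (sha256:53c535741fb4…), one for the repo digest (kube-proxy@sha256:8a44c6e094af…), with ImageUpdate events in between as the stored records are refreshed; when the image is already present, the same resolution shows up as ImageUpdate events only (as the ghcr.io/flatcar/nginx pull for the test pod does at 07:58:15 further below). The sketch below pulls the Name: field out of such event strings with a regular expression — again just a reading aid for the log text:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Event payloads as they appear (abbreviated) in the containerd entries above.
	events := []string{
		`ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}`,
		`ImageCreate event &ImageCreate{Name:sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}`,
		`ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}`,
	}

	re := regexp.MustCompile(`&Image(Create|Update)\{Name:([^,]+),`)
	for _, e := range events {
		if m := re.FindStringSubmatch(e); m != nil {
			fmt.Printf("Image%-6s %s\n", m[1], m[2])
		}
	}
}
```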
Jul 2 07:57:38.278840 env[1216]: time="2024-07-02T07:57:38.277981833Z" level=info msg="StartContainer for \"7dfdfa887b48a794d50fd5c4efd645313b1b7d56750bf561cc3e1ff7556d921a\" returns successfully" Jul 2 07:57:38.323491 kubelet[1513]: E0702 07:57:38.323428 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:38.626719 env[1216]: time="2024-07-02T07:57:38.626659692Z" level=info msg="CreateContainer within sandbox \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:57:38.643753 kubelet[1513]: I0702 07:57:38.642792 1513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d25mt" podStartSLOduration=2.897827414 podStartE2EDuration="15.642750589s" podCreationTimestamp="2024-07-02 07:57:23 +0000 UTC" firstStartedPulling="2024-07-02 07:57:25.429486892 +0000 UTC m=+3.087589616" lastFinishedPulling="2024-07-02 07:57:38.174410052 +0000 UTC m=+15.832512791" observedRunningTime="2024-07-02 07:57:38.624937301 +0000 UTC m=+16.283040053" watchObservedRunningTime="2024-07-02 07:57:38.642750589 +0000 UTC m=+16.300853337" Jul 2 07:57:38.661407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1661638468.mount: Deactivated successfully. Jul 2 07:57:38.683062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862696333.mount: Deactivated successfully. Jul 2 07:57:38.690393 env[1216]: time="2024-07-02T07:57:38.690307363Z" level=info msg="CreateContainer within sandbox \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"30989e17830b8983eb1ece39982c5778ca6867c9edc2aa2e84c4a7069d348e27\"" Jul 2 07:57:38.691822 env[1216]: time="2024-07-02T07:57:38.691744519Z" level=info msg="StartContainer for \"30989e17830b8983eb1ece39982c5778ca6867c9edc2aa2e84c4a7069d348e27\"" Jul 2 07:57:38.719486 systemd[1]: Started cri-containerd-30989e17830b8983eb1ece39982c5778ca6867c9edc2aa2e84c4a7069d348e27.scope. Jul 2 07:57:38.789116 env[1216]: time="2024-07-02T07:57:38.786941138Z" level=info msg="StartContainer for \"30989e17830b8983eb1ece39982c5778ca6867c9edc2aa2e84c4a7069d348e27\" returns successfully" Jul 2 07:57:38.788886 systemd[1]: cri-containerd-30989e17830b8983eb1ece39982c5778ca6867c9edc2aa2e84c4a7069d348e27.scope: Deactivated successfully. 
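The pod_startup_latency_tracker entry above for kube-proxy-d25mt reports two figures: podStartE2EDuration, which is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration, which appears to be the same interval with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. That reconstruction reproduces the logged numbers exactly, but it is inferred from this entry rather than confirmed here from kubelet source. A sketch reproducing the figures from the values quoted in the log (the pull window is taken from the monotonic "m=+…" offsets):

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matching Go's time.Time.String() output as it appears in the log.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-07-02 07:57:23 +0000 UTC")
	watched := mustParse("2024-07-02 07:57:38.642750589 +0000 UTC")

	// Monotonic offsets from firstStartedPulling / lastFinishedPulling above.
	firstPull, _ := time.ParseDuration("3.087589616s")
	lastPull, _ := time.ParseDuration("15.832512791s")

	e2e := watched.Sub(created)
	pull := lastPull - firstPull
	fmt.Println("podStartE2EDuration:", e2e)      // 15.642750589s
	fmt.Println("image pull window:  ", pull)     // 12.744923175s
	fmt.Println("podStartSLOduration:", e2e-pull) // 2.897827414s
}
```

The cilium-h7fgk, nginx-deployment, nfs-server-provisioner-0 and test-pod-1 entries later in the log carry the same fields and reconcile the same way.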
Jul 2 07:57:38.846622 env[1216]: time="2024-07-02T07:57:38.846522609Z" level=info msg="shim disconnected" id=30989e17830b8983eb1ece39982c5778ca6867c9edc2aa2e84c4a7069d348e27 Jul 2 07:57:38.846622 env[1216]: time="2024-07-02T07:57:38.846596380Z" level=warning msg="cleaning up after shim disconnected" id=30989e17830b8983eb1ece39982c5778ca6867c9edc2aa2e84c4a7069d348e27 namespace=k8s.io Jul 2 07:57:38.846622 env[1216]: time="2024-07-02T07:57:38.846611847Z" level=info msg="cleaning up dead shim" Jul 2 07:57:38.858287 env[1216]: time="2024-07-02T07:57:38.858222166Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:57:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2041 runtime=io.containerd.runc.v2\n" Jul 2 07:57:39.324556 kubelet[1513]: E0702 07:57:39.324471 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:39.630730 env[1216]: time="2024-07-02T07:57:39.630333432Z" level=info msg="CreateContainer within sandbox \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:57:39.655259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4264435031.mount: Deactivated successfully. Jul 2 07:57:39.658869 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 2 07:57:39.679916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2695793370.mount: Deactivated successfully. Jul 2 07:57:39.686866 env[1216]: time="2024-07-02T07:57:39.686802530Z" level=info msg="CreateContainer within sandbox \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc\"" Jul 2 07:57:39.687776 env[1216]: time="2024-07-02T07:57:39.687701666Z" level=info msg="StartContainer for \"28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc\"" Jul 2 07:57:39.710850 systemd[1]: Started cri-containerd-28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc.scope. 
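Within the cd5756… sandbox the kubelet runs Cilium's init containers in sequence — mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state — each following the same CreateContainer → StartContainer → scope deactivated → "shim disconnected"/cleanup pattern seen above, before the long-running cilium-agent container that follows; the immediate scope deactivation is expected, since each init container exits as soon as its step is done. The sketch below recovers that order from the CreateContainer messages (abbreviated copies of the log text):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Abbreviated CreateContainer messages from the entries above (plus the
	// cilium-agent one that follows below).
	msgs := []string{
		`CreateContainer within sandbox "cd5756457e14a458..." for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}`,
		`CreateContainer within sandbox "cd5756457e14a458..." for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}`,
		`CreateContainer within sandbox "cd5756457e14a458..." for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}`,
		`CreateContainer within sandbox "cd5756457e14a458..." for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}`,
		`CreateContainer within sandbox "cd5756457e14a458..." for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}`,
	}

	re := regexp.MustCompile(`&ContainerMetadata\{Name:([^,]+),Attempt:(\d+),`)
	for i, m := range msgs {
		if sub := re.FindStringSubmatch(m); sub != nil {
			fmt.Printf("%d. %s (attempt %s)\n", i+1, sub[1], sub[2])
		}
	}
}
```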
Jul 2 07:57:39.759029 env[1216]: time="2024-07-02T07:57:39.758937775Z" level=info msg="StartContainer for \"28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc\" returns successfully" Jul 2 07:57:39.961542 kubelet[1513]: I0702 07:57:39.961412 1513 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 07:57:40.313814 kernel: Initializing XFRM netlink socket Jul 2 07:57:40.325551 kubelet[1513]: E0702 07:57:40.325473 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:41.326329 kubelet[1513]: E0702 07:57:41.326258 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:41.986836 systemd-networkd[1024]: cilium_host: Link UP Jul 2 07:57:41.987058 systemd-networkd[1024]: cilium_net: Link UP Jul 2 07:57:41.987065 systemd-networkd[1024]: cilium_net: Gained carrier Jul 2 07:57:41.994821 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 07:57:41.987299 systemd-networkd[1024]: cilium_host: Gained carrier Jul 2 07:57:41.996721 systemd-networkd[1024]: cilium_host: Gained IPv6LL Jul 2 07:57:42.114415 kubelet[1513]: I0702 07:57:42.112805 1513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h7fgk" podStartSLOduration=10.206154745 podStartE2EDuration="19.112745011s" podCreationTimestamp="2024-07-02 07:57:23 +0000 UTC" firstStartedPulling="2024-07-02 07:57:25.41120488 +0000 UTC m=+3.069307604" lastFinishedPulling="2024-07-02 07:57:34.317795131 +0000 UTC m=+11.975897870" observedRunningTime="2024-07-02 07:57:40.649667331 +0000 UTC m=+18.307770080" watchObservedRunningTime="2024-07-02 07:57:42.112745011 +0000 UTC m=+19.770847750" Jul 2 07:57:42.114415 kubelet[1513]: I0702 07:57:42.113364 1513 topology_manager.go:215] "Topology Admit Handler" podUID="3fad9d75-225e-4d69-bef2-70b18c50db63" podNamespace="default" podName="nginx-deployment-85f456d6dd-cj5jd" Jul 2 07:57:42.124271 systemd[1]: Created slice kubepods-besteffort-pod3fad9d75_225e_4d69_bef2_70b18c50db63.slice. 
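With the agent running, systemd-networkd starts reporting Cilium's datapath devices: the cilium_host/cilium_net pair here, with cilium_vxlan, lxc_health and a per-endpoint lxc* veth (lxc0c8a153c597b, lxc1195ba9ddc72, lxc2b32c8357381) following below. A quick way to confirm which of these links exist on a node is to enumerate interfaces and filter by prefix — a standard-library sketch meant to be run on the node itself:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Cilium names its devices cilium_* (host/net/vxlan) and creates one
		// lxc* veth per local endpoint, matching the systemd-networkd entries.
		if strings.HasPrefix(ifc.Name, "cilium_") || strings.HasPrefix(ifc.Name, "lxc") {
			fmt.Printf("%-20s flags=%s\n", ifc.Name, ifc.Flags)
		}
	}
}
```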
Jul 2 07:57:42.154660 systemd-networkd[1024]: cilium_vxlan: Link UP Jul 2 07:57:42.154679 systemd-networkd[1024]: cilium_vxlan: Gained carrier Jul 2 07:57:42.205388 kubelet[1513]: I0702 07:57:42.205305 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcs9c\" (UniqueName: \"kubernetes.io/projected/3fad9d75-225e-4d69-bef2-70b18c50db63-kube-api-access-bcs9c\") pod \"nginx-deployment-85f456d6dd-cj5jd\" (UID: \"3fad9d75-225e-4d69-bef2-70b18c50db63\") " pod="default/nginx-deployment-85f456d6dd-cj5jd" Jul 2 07:57:42.326750 kubelet[1513]: E0702 07:57:42.326700 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:42.429992 env[1216]: time="2024-07-02T07:57:42.429911911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cj5jd,Uid:3fad9d75-225e-4d69-bef2-70b18c50db63,Namespace:default,Attempt:0,}" Jul 2 07:57:42.461798 kernel: NET: Registered PF_ALG protocol family Jul 2 07:57:43.034004 systemd-networkd[1024]: cilium_net: Gained IPv6LL Jul 2 07:57:43.312902 kubelet[1513]: E0702 07:57:43.312848 1513 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:43.327455 kubelet[1513]: E0702 07:57:43.327410 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:43.332053 systemd-networkd[1024]: lxc_health: Link UP Jul 2 07:57:43.360904 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:57:43.361293 systemd-networkd[1024]: lxc_health: Gained carrier Jul 2 07:57:43.417091 systemd-networkd[1024]: cilium_vxlan: Gained IPv6LL Jul 2 07:57:44.005877 systemd-networkd[1024]: lxc0c8a153c597b: Link UP Jul 2 07:57:44.031715 kernel: eth0: renamed from tmpb0938 Jul 2 07:57:44.045804 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0c8a153c597b: link becomes ready Jul 2 07:57:44.046504 systemd-networkd[1024]: lxc0c8a153c597b: Gained carrier Jul 2 07:57:44.328203 kubelet[1513]: E0702 07:57:44.328156 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:44.889572 systemd-networkd[1024]: lxc_health: Gained IPv6LL Jul 2 07:57:45.145563 systemd-networkd[1024]: lxc0c8a153c597b: Gained IPv6LL Jul 2 07:57:45.329577 kubelet[1513]: E0702 07:57:45.329511 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:46.330238 kubelet[1513]: E0702 07:57:46.330169 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:47.331443 kubelet[1513]: E0702 07:57:47.331388 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:48.332838 kubelet[1513]: E0702 07:57:48.332783 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:48.727209 kubelet[1513]: I0702 07:57:48.727070 1513 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 07:57:48.823163 env[1216]: time="2024-07-02T07:57:48.823043523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:57:48.823163 env[1216]: time="2024-07-02T07:57:48.823091304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:57:48.823163 env[1216]: time="2024-07-02T07:57:48.823109846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:57:48.824051 env[1216]: time="2024-07-02T07:57:48.823998002Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b093863ca81f24806d30a89b7728799a0250c4d28df56d4aa503b4a9dc1e26f3 pid=2562 runtime=io.containerd.runc.v2 Jul 2 07:57:48.848328 systemd[1]: Started cri-containerd-b093863ca81f24806d30a89b7728799a0250c4d28df56d4aa503b4a9dc1e26f3.scope. Jul 2 07:57:48.861665 systemd[1]: run-containerd-runc-k8s.io-b093863ca81f24806d30a89b7728799a0250c4d28df56d4aa503b4a9dc1e26f3-runc.KIepNt.mount: Deactivated successfully. Jul 2 07:57:48.927408 env[1216]: time="2024-07-02T07:57:48.927332484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cj5jd,Uid:3fad9d75-225e-4d69-bef2-70b18c50db63,Namespace:default,Attempt:0,} returns sandbox id \"b093863ca81f24806d30a89b7728799a0250c4d28df56d4aa503b4a9dc1e26f3\"" Jul 2 07:57:48.930038 env[1216]: time="2024-07-02T07:57:48.929989975Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 07:57:49.333319 kubelet[1513]: E0702 07:57:49.333247 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:50.334435 kubelet[1513]: E0702 07:57:50.334379 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:51.334894 kubelet[1513]: E0702 07:57:51.334798 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:51.414439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3950467152.mount: Deactivated successfully. 
Jul 2 07:57:52.335072 kubelet[1513]: E0702 07:57:52.334991 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:53.156728 env[1216]: time="2024-07-02T07:57:53.156647967Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:53.159904 env[1216]: time="2024-07-02T07:57:53.159845035Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:53.162772 env[1216]: time="2024-07-02T07:57:53.162707420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:53.165196 env[1216]: time="2024-07-02T07:57:53.165149481Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:57:53.166157 env[1216]: time="2024-07-02T07:57:53.166099130Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 07:57:53.169949 env[1216]: time="2024-07-02T07:57:53.169889458Z" level=info msg="CreateContainer within sandbox \"b093863ca81f24806d30a89b7728799a0250c4d28df56d4aa503b4a9dc1e26f3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 2 07:57:53.194579 env[1216]: time="2024-07-02T07:57:53.194498434Z" level=info msg="CreateContainer within sandbox \"b093863ca81f24806d30a89b7728799a0250c4d28df56d4aa503b4a9dc1e26f3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"83570796b69d5efe0ae101e2a38ec7d72c7059ae5afc93c293e1144156694a3f\"" Jul 2 07:57:53.195492 env[1216]: time="2024-07-02T07:57:53.195427383Z" level=info msg="StartContainer for \"83570796b69d5efe0ae101e2a38ec7d72c7059ae5afc93c293e1144156694a3f\"" Jul 2 07:57:53.235495 systemd[1]: run-containerd-runc-k8s.io-83570796b69d5efe0ae101e2a38ec7d72c7059ae5afc93c293e1144156694a3f-runc.EwQLsi.mount: Deactivated successfully. Jul 2 07:57:53.240647 systemd[1]: Started cri-containerd-83570796b69d5efe0ae101e2a38ec7d72c7059ae5afc93c293e1144156694a3f.scope. 
Jul 2 07:57:53.280568 env[1216]: time="2024-07-02T07:57:53.279390663Z" level=info msg="StartContainer for \"83570796b69d5efe0ae101e2a38ec7d72c7059ae5afc93c293e1144156694a3f\" returns successfully" Jul 2 07:57:53.335564 kubelet[1513]: E0702 07:57:53.335466 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:53.674884 kubelet[1513]: I0702 07:57:53.674807 1513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-cj5jd" podStartSLOduration=7.436167358 podStartE2EDuration="11.67477828s" podCreationTimestamp="2024-07-02 07:57:42 +0000 UTC" firstStartedPulling="2024-07-02 07:57:48.929204353 +0000 UTC m=+26.587307082" lastFinishedPulling="2024-07-02 07:57:53.167815271 +0000 UTC m=+30.825918004" observedRunningTime="2024-07-02 07:57:53.674424905 +0000 UTC m=+31.332527653" watchObservedRunningTime="2024-07-02 07:57:53.67477828 +0000 UTC m=+31.332881018" Jul 2 07:57:54.208519 update_engine[1204]: I0702 07:57:54.208416 1204 update_attempter.cc:509] Updating boot flags... Jul 2 07:57:54.336533 kubelet[1513]: E0702 07:57:54.336453 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:55.337302 kubelet[1513]: E0702 07:57:55.337202 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:56.337908 kubelet[1513]: E0702 07:57:56.337834 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:56.897257 kubelet[1513]: I0702 07:57:56.897204 1513 topology_manager.go:215] "Topology Admit Handler" podUID="d4df5ceb-dadf-486a-97bb-73ad868ce775" podNamespace="default" podName="nfs-server-provisioner-0" Jul 2 07:57:56.904039 systemd[1]: Created slice kubepods-besteffort-podd4df5ceb_dadf_486a_97bb_73ad868ce775.slice. 
Jul 2 07:57:57.005810 kubelet[1513]: I0702 07:57:57.005732 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqxrc\" (UniqueName: \"kubernetes.io/projected/d4df5ceb-dadf-486a-97bb-73ad868ce775-kube-api-access-vqxrc\") pod \"nfs-server-provisioner-0\" (UID: \"d4df5ceb-dadf-486a-97bb-73ad868ce775\") " pod="default/nfs-server-provisioner-0" Jul 2 07:57:57.005810 kubelet[1513]: I0702 07:57:57.005827 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d4df5ceb-dadf-486a-97bb-73ad868ce775-data\") pod \"nfs-server-provisioner-0\" (UID: \"d4df5ceb-dadf-486a-97bb-73ad868ce775\") " pod="default/nfs-server-provisioner-0" Jul 2 07:57:57.209903 env[1216]: time="2024-07-02T07:57:57.209717376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d4df5ceb-dadf-486a-97bb-73ad868ce775,Namespace:default,Attempt:0,}" Jul 2 07:57:57.269735 systemd-networkd[1024]: lxc1195ba9ddc72: Link UP Jul 2 07:57:57.280814 kernel: eth0: renamed from tmp7d5ec Jul 2 07:57:57.305479 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:57:57.305627 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1195ba9ddc72: link becomes ready Jul 2 07:57:57.307249 systemd-networkd[1024]: lxc1195ba9ddc72: Gained carrier Jul 2 07:57:57.338613 kubelet[1513]: E0702 07:57:57.338558 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:57.572717 env[1216]: time="2024-07-02T07:57:57.572612928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:57:57.572717 env[1216]: time="2024-07-02T07:57:57.572668818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:57:57.573049 env[1216]: time="2024-07-02T07:57:57.572687721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:57:57.573279 env[1216]: time="2024-07-02T07:57:57.573199656Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d5ec9ed23fb9d4ebcee2ffa3b1342842725867bf001ff33d45d34fcced6b534 pid=2701 runtime=io.containerd.runc.v2 Jul 2 07:57:57.599354 systemd[1]: Started cri-containerd-7d5ec9ed23fb9d4ebcee2ffa3b1342842725867bf001ff33d45d34fcced6b534.scope. Jul 2 07:57:57.671800 env[1216]: time="2024-07-02T07:57:57.671563705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d4df5ceb-dadf-486a-97bb-73ad868ce775,Namespace:default,Attempt:0,} returns sandbox id \"7d5ec9ed23fb9d4ebcee2ffa3b1342842725867bf001ff33d45d34fcced6b534\"" Jul 2 07:57:57.674088 env[1216]: time="2024-07-02T07:57:57.674021285Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 2 07:57:58.125252 systemd[1]: run-containerd-runc-k8s.io-7d5ec9ed23fb9d4ebcee2ffa3b1342842725867bf001ff33d45d34fcced6b534-runc.jJWPwq.mount: Deactivated successfully. 
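Each "starting signal loop" entry records the working path of the per-sandbox containerd shim, /run/containerd/io.containerd.runtime.v2.task/<namespace>/<id>, plus the shim's pid — here namespace k8s.io and the 7d5ec9ed… sandbox just created for nfs-server-provisioner-0 (the cd5756…, ea01b7… and b09386… sandboxes above, and a337ff… below, follow the same shape). A sketch decomposing such a path, using the literal value from the entry above:

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	// Path copied from the "starting signal loop" entry above.
	p := "/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d5ec9ed23fb9d4ebcee2ffa3b1342842725867bf001ff33d45d34fcced6b534"

	id := path.Base(p)            // sandbox/container ID
	ns := path.Base(path.Dir(p))  // containerd namespace
	fmt.Println("namespace:", ns) // k8s.io
	fmt.Println("id:       ", id) // 7d5ec9ed...
}
```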
Jul 2 07:57:58.339334 kubelet[1513]: E0702 07:57:58.339254 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:57:58.969179 systemd-networkd[1024]: lxc1195ba9ddc72: Gained IPv6LL Jul 2 07:57:59.340147 kubelet[1513]: E0702 07:57:59.340064 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:00.309477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1515886178.mount: Deactivated successfully. Jul 2 07:58:00.340268 kubelet[1513]: E0702 07:58:00.340220 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:01.340636 kubelet[1513]: E0702 07:58:01.340577 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:02.341805 kubelet[1513]: E0702 07:58:02.341734 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:02.851229 env[1216]: time="2024-07-02T07:58:02.851156146Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:58:02.855963 env[1216]: time="2024-07-02T07:58:02.855879677Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:58:02.862028 env[1216]: time="2024-07-02T07:58:02.861955568Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:58:02.867705 env[1216]: time="2024-07-02T07:58:02.867633275Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:58:02.869548 env[1216]: time="2024-07-02T07:58:02.869466555Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jul 2 07:58:02.876181 env[1216]: time="2024-07-02T07:58:02.876041648Z" level=info msg="CreateContainer within sandbox \"7d5ec9ed23fb9d4ebcee2ffa3b1342842725867bf001ff33d45d34fcced6b534\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 2 07:58:02.907648 env[1216]: time="2024-07-02T07:58:02.907562195Z" level=info msg="CreateContainer within sandbox \"7d5ec9ed23fb9d4ebcee2ffa3b1342842725867bf001ff33d45d34fcced6b534\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f9a1c43e10972f180ec5349cab86ba1a3fd3b7d16e73d03ae18d2de9bfbbf994\"" Jul 2 07:58:02.909071 env[1216]: time="2024-07-02T07:58:02.909000215Z" level=info msg="StartContainer for \"f9a1c43e10972f180ec5349cab86ba1a3fd3b7d16e73d03ae18d2de9bfbbf994\"" Jul 2 07:58:02.953280 systemd[1]: run-containerd-runc-k8s.io-f9a1c43e10972f180ec5349cab86ba1a3fd3b7d16e73d03ae18d2de9bfbbf994-runc.4cZZya.mount: Deactivated successfully. Jul 2 07:58:02.957258 systemd[1]: Started cri-containerd-f9a1c43e10972f180ec5349cab86ba1a3fd3b7d16e73d03ae18d2de9bfbbf994.scope. 
Jul 2 07:58:03.007263 env[1216]: time="2024-07-02T07:58:03.007188831Z" level=info msg="StartContainer for \"f9a1c43e10972f180ec5349cab86ba1a3fd3b7d16e73d03ae18d2de9bfbbf994\" returns successfully" Jul 2 07:58:03.312832 kubelet[1513]: E0702 07:58:03.312748 1513 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:03.342308 kubelet[1513]: E0702 07:58:03.342241 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:03.704640 kubelet[1513]: I0702 07:58:03.704412 1513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.506277378 podStartE2EDuration="7.704387049s" podCreationTimestamp="2024-07-02 07:57:56 +0000 UTC" firstStartedPulling="2024-07-02 07:57:57.673576669 +0000 UTC m=+35.331679395" lastFinishedPulling="2024-07-02 07:58:02.87168634 +0000 UTC m=+40.529789066" observedRunningTime="2024-07-02 07:58:03.704379639 +0000 UTC m=+41.362482384" watchObservedRunningTime="2024-07-02 07:58:03.704387049 +0000 UTC m=+41.362489800" Jul 2 07:58:04.343477 kubelet[1513]: E0702 07:58:04.343351 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:05.344425 kubelet[1513]: E0702 07:58:05.344353 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:06.345077 kubelet[1513]: E0702 07:58:06.344995 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:07.346138 kubelet[1513]: E0702 07:58:07.346080 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:08.346595 kubelet[1513]: E0702 07:58:08.346524 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:09.346780 kubelet[1513]: E0702 07:58:09.346692 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:10.347904 kubelet[1513]: E0702 07:58:10.347777 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:11.348902 kubelet[1513]: E0702 07:58:11.348823 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:12.349506 kubelet[1513]: E0702 07:58:12.349435 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:13.043244 kubelet[1513]: I0702 07:58:13.043189 1513 topology_manager.go:215] "Topology Admit Handler" podUID="d5fdef3e-fb11-4e9d-933c-1963dff2226c" podNamespace="default" podName="test-pod-1" Jul 2 07:58:13.050867 systemd[1]: Created slice kubepods-besteffort-podd5fdef3e_fb11_4e9d_933c_1963dff2226c.slice. 
Jul 2 07:58:13.203751 kubelet[1513]: I0702 07:58:13.203675 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-37a8eeaf-f32d-41c8-a511-097c5b531f02\" (UniqueName: \"kubernetes.io/nfs/d5fdef3e-fb11-4e9d-933c-1963dff2226c-pvc-37a8eeaf-f32d-41c8-a511-097c5b531f02\") pod \"test-pod-1\" (UID: \"d5fdef3e-fb11-4e9d-933c-1963dff2226c\") " pod="default/test-pod-1" Jul 2 07:58:13.204187 kubelet[1513]: I0702 07:58:13.204131 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5ncb\" (UniqueName: \"kubernetes.io/projected/d5fdef3e-fb11-4e9d-933c-1963dff2226c-kube-api-access-j5ncb\") pod \"test-pod-1\" (UID: \"d5fdef3e-fb11-4e9d-933c-1963dff2226c\") " pod="default/test-pod-1" Jul 2 07:58:13.354937 kubelet[1513]: E0702 07:58:13.349745 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:13.361795 kernel: FS-Cache: Loaded Jul 2 07:58:13.430987 kernel: RPC: Registered named UNIX socket transport module. Jul 2 07:58:13.431200 kernel: RPC: Registered udp transport module. Jul 2 07:58:13.431246 kernel: RPC: Registered tcp transport module. Jul 2 07:58:13.435712 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 2 07:58:13.529812 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 2 07:58:13.785390 kernel: NFS: Registering the id_resolver key type Jul 2 07:58:13.785583 kernel: Key type id_resolver registered Jul 2 07:58:13.785628 kernel: Key type id_legacy registered Jul 2 07:58:13.844532 nfsidmap[2821]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Jul 2 07:58:13.858300 nfsidmap[2822]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Jul 2 07:58:13.956744 env[1216]: time="2024-07-02T07:58:13.956668662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d5fdef3e-fb11-4e9d-933c-1963dff2226c,Namespace:default,Attempt:0,}" Jul 2 07:58:14.350688 kubelet[1513]: E0702 07:58:14.350605 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:14.417037 systemd-networkd[1024]: lxc2b32c8357381: Link UP Jul 2 07:58:14.430493 kernel: eth0: renamed from tmpa337f Jul 2 07:58:14.446595 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:58:14.458249 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2b32c8357381: link becomes ready Jul 2 07:58:14.462289 systemd-networkd[1024]: lxc2b32c8357381: Gained carrier Jul 2 07:58:14.792974 env[1216]: time="2024-07-02T07:58:14.792881636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:58:14.793230 env[1216]: time="2024-07-02T07:58:14.792938783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:58:14.793230 env[1216]: time="2024-07-02T07:58:14.792958904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:58:14.793438 env[1216]: time="2024-07-02T07:58:14.793253918Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a337ff8f4dfd75ff28d9c3441f00d5463a47dfad0f8ad7b3fe84ce027bb8ea5f pid=2852 runtime=io.containerd.runc.v2 Jul 2 07:58:14.827080 systemd[1]: run-containerd-runc-k8s.io-a337ff8f4dfd75ff28d9c3441f00d5463a47dfad0f8ad7b3fe84ce027bb8ea5f-runc.ZspsiK.mount: Deactivated successfully. Jul 2 07:58:14.833939 systemd[1]: Started cri-containerd-a337ff8f4dfd75ff28d9c3441f00d5463a47dfad0f8ad7b3fe84ce027bb8ea5f.scope. Jul 2 07:58:14.897635 env[1216]: time="2024-07-02T07:58:14.897093240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d5fdef3e-fb11-4e9d-933c-1963dff2226c,Namespace:default,Attempt:0,} returns sandbox id \"a337ff8f4dfd75ff28d9c3441f00d5463a47dfad0f8ad7b3fe84ce027bb8ea5f\"" Jul 2 07:58:14.900242 env[1216]: time="2024-07-02T07:58:14.900179185Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 07:58:15.093301 env[1216]: time="2024-07-02T07:58:15.092515306Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:58:15.096613 env[1216]: time="2024-07-02T07:58:15.096558187Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:58:15.099170 env[1216]: time="2024-07-02T07:58:15.099117594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:58:15.102225 env[1216]: time="2024-07-02T07:58:15.102173107Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:58:15.103150 env[1216]: time="2024-07-02T07:58:15.103097196Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 07:58:15.106829 env[1216]: time="2024-07-02T07:58:15.106776733Z" level=info msg="CreateContainer within sandbox \"a337ff8f4dfd75ff28d9c3441f00d5463a47dfad0f8ad7b3fe84ce027bb8ea5f\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 2 07:58:15.131142 env[1216]: time="2024-07-02T07:58:15.131072671Z" level=info msg="CreateContainer within sandbox \"a337ff8f4dfd75ff28d9c3441f00d5463a47dfad0f8ad7b3fe84ce027bb8ea5f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"74f026b347fcb4bcab7dc9de8b7756a7e889a9a98140af1e00b2e695a3d1790e\"" Jul 2 07:58:15.131986 env[1216]: time="2024-07-02T07:58:15.131947662Z" level=info msg="StartContainer for \"74f026b347fcb4bcab7dc9de8b7756a7e889a9a98140af1e00b2e695a3d1790e\"" Jul 2 07:58:15.156385 systemd[1]: Started cri-containerd-74f026b347fcb4bcab7dc9de8b7756a7e889a9a98140af1e00b2e695a3d1790e.scope. 
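The nfsidmap messages above mean NFSv4 ID mapping on this client could not translate root@nfs-server-provisioner.default.svc.cluster.local because the client's idmapping domain is c.flatcar-212911.internal (the node's DNS domain); unmapped identities typically end up as nobody. The domain normally comes from the Domain key in /etc/idmapd.conf when that file exists — the path and key name are standard libnfsidmap conventions and an assumption here, not something this log confirms. A sketch that reports the configured domain, if any:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/idmapd.conf") // conventional libnfsidmap config path
	if err != nil {
		fmt.Println("no idmapd.conf; the domain falls back to the host's DNS domain")
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "Domain") {
			if _, v, ok := strings.Cut(line, "="); ok {
				fmt.Println("idmapping domain:", strings.TrimSpace(v))
				return
			}
		}
	}
	fmt.Println("Domain not set; the domain falls back to the host's DNS domain")
}
```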
Jul 2 07:58:15.203209 env[1216]: time="2024-07-02T07:58:15.203152686Z" level=info msg="StartContainer for \"74f026b347fcb4bcab7dc9de8b7756a7e889a9a98140af1e00b2e695a3d1790e\" returns successfully" Jul 2 07:58:15.350965 kubelet[1513]: E0702 07:58:15.350820 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:15.385670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1574454787.mount: Deactivated successfully. Jul 2 07:58:15.733747 kubelet[1513]: I0702 07:58:15.733581 1513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.527942354 podStartE2EDuration="18.733559537s" podCreationTimestamp="2024-07-02 07:57:57 +0000 UTC" firstStartedPulling="2024-07-02 07:58:14.899218884 +0000 UTC m=+52.557321623" lastFinishedPulling="2024-07-02 07:58:15.104836073 +0000 UTC m=+52.762938806" observedRunningTime="2024-07-02 07:58:15.732228515 +0000 UTC m=+53.390331263" watchObservedRunningTime="2024-07-02 07:58:15.733559537 +0000 UTC m=+53.391662284" Jul 2 07:58:15.865055 systemd-networkd[1024]: lxc2b32c8357381: Gained IPv6LL Jul 2 07:58:16.351754 kubelet[1513]: E0702 07:58:16.351683 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:17.352660 kubelet[1513]: E0702 07:58:17.352590 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:18.353085 kubelet[1513]: E0702 07:58:18.353017 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:19.354295 kubelet[1513]: E0702 07:58:19.354223 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:20.354468 kubelet[1513]: E0702 07:58:20.354402 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:21.354889 kubelet[1513]: E0702 07:58:21.354818 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:22.356010 kubelet[1513]: E0702 07:58:22.355844 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:22.995195 systemd[1]: run-containerd-runc-k8s.io-28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc-runc.ON7j2g.mount: Deactivated successfully. 
Jul 2 07:58:23.020352 env[1216]: time="2024-07-02T07:58:23.020265610Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:58:23.028550 env[1216]: time="2024-07-02T07:58:23.028494666Z" level=info msg="StopContainer for \"28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc\" with timeout 2 (s)" Jul 2 07:58:23.028932 env[1216]: time="2024-07-02T07:58:23.028895493Z" level=info msg="Stop container \"28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc\" with signal terminated" Jul 2 07:58:23.039345 systemd-networkd[1024]: lxc_health: Link DOWN Jul 2 07:58:23.039359 systemd-networkd[1024]: lxc_health: Lost carrier Jul 2 07:58:23.065413 systemd[1]: cri-containerd-28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc.scope: Deactivated successfully. Jul 2 07:58:23.065786 systemd[1]: cri-containerd-28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc.scope: Consumed 9.337s CPU time. Jul 2 07:58:23.093325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc-rootfs.mount: Deactivated successfully. Jul 2 07:58:23.117451 env[1216]: time="2024-07-02T07:58:23.117383211Z" level=info msg="shim disconnected" id=28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc Jul 2 07:58:23.117451 env[1216]: time="2024-07-02T07:58:23.117452679Z" level=warning msg="cleaning up after shim disconnected" id=28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc namespace=k8s.io Jul 2 07:58:23.117864 env[1216]: time="2024-07-02T07:58:23.117468150Z" level=info msg="cleaning up dead shim" Jul 2 07:58:23.131600 env[1216]: time="2024-07-02T07:58:23.131530019Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2984 runtime=io.containerd.runc.v2\n" Jul 2 07:58:23.135209 env[1216]: time="2024-07-02T07:58:23.135150623Z" level=info msg="StopContainer for \"28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc\" returns successfully" Jul 2 07:58:23.136176 env[1216]: time="2024-07-02T07:58:23.136128652Z" level=info msg="StopPodSandbox for \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\"" Jul 2 07:58:23.136326 env[1216]: time="2024-07-02T07:58:23.136218685Z" level=info msg="Container to stop \"53f7977887e983703f6eb788a2fc685a18527cac055121c399a79930658fceed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:23.136326 env[1216]: time="2024-07-02T07:58:23.136243936Z" level=info msg="Container to stop \"e652525ccb3e37adfcedb7699d2778461d2450d7119cb670264226c8eba7d7e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:23.136326 env[1216]: time="2024-07-02T07:58:23.136262066Z" level=info msg="Container to stop \"ded82c49694e96246f559322a5e021aeec378865c7a742fc8373fc345311666d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:23.136326 env[1216]: time="2024-07-02T07:58:23.136282980Z" level=info msg="Container to stop \"30989e17830b8983eb1ece39982c5778ca6867c9edc2aa2e84c4a7069d348e27\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:23.136326 env[1216]: time="2024-07-02T07:58:23.136301580Z" level=info msg="Container to stop 
\"28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:23.139575 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4-shm.mount: Deactivated successfully. Jul 2 07:58:23.149993 systemd[1]: cri-containerd-cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4.scope: Deactivated successfully. Jul 2 07:58:23.184218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4-rootfs.mount: Deactivated successfully. Jul 2 07:58:23.192729 env[1216]: time="2024-07-02T07:58:23.192655031Z" level=info msg="shim disconnected" id=cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4 Jul 2 07:58:23.192729 env[1216]: time="2024-07-02T07:58:23.192717829Z" level=warning msg="cleaning up after shim disconnected" id=cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4 namespace=k8s.io Jul 2 07:58:23.192729 env[1216]: time="2024-07-02T07:58:23.192733572Z" level=info msg="cleaning up dead shim" Jul 2 07:58:23.207417 env[1216]: time="2024-07-02T07:58:23.207345193Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3016 runtime=io.containerd.runc.v2\n" Jul 2 07:58:23.207927 env[1216]: time="2024-07-02T07:58:23.207848855Z" level=info msg="TearDown network for sandbox \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" successfully" Jul 2 07:58:23.207927 env[1216]: time="2024-07-02T07:58:23.207904640Z" level=info msg="StopPodSandbox for \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" returns successfully" Jul 2 07:58:23.312509 kubelet[1513]: E0702 07:58:23.312462 1513 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:23.356903 kubelet[1513]: E0702 07:58:23.356845 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:23.373420 kubelet[1513]: I0702 07:58:23.373381 1513 scope.go:117] "RemoveContainer" containerID="e652525ccb3e37adfcedb7699d2778461d2450d7119cb670264226c8eba7d7e0" Jul 2 07:58:23.373642 kubelet[1513]: I0702 07:58:23.373626 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-host-proc-sys-kernel\") pod \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " Jul 2 07:58:23.373733 kubelet[1513]: I0702 07:58:23.373659 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-bpf-maps\") pod \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " Jul 2 07:58:23.373733 kubelet[1513]: I0702 07:58:23.373686 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-xtables-lock\") pod \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " Jul 2 07:58:23.373733 kubelet[1513]: I0702 07:58:23.373720 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd2k9\" (UniqueName: 
\"kubernetes.io/projected/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-kube-api-access-kd2k9\") pod \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " Jul 2 07:58:23.373992 kubelet[1513]: I0702 07:58:23.373746 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cilium-cgroup\") pod \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " Jul 2 07:58:23.373992 kubelet[1513]: I0702 07:58:23.373791 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-host-proc-sys-net\") pod \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " Jul 2 07:58:23.373992 kubelet[1513]: I0702 07:58:23.373827 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cni-path\") pod \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " Jul 2 07:58:23.373992 kubelet[1513]: I0702 07:58:23.373853 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-hostproc\") pod \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " Jul 2 07:58:23.373992 kubelet[1513]: I0702 07:58:23.373879 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-etc-cni-netd\") pod \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " Jul 2 07:58:23.373992 kubelet[1513]: I0702 07:58:23.373913 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cilium-run\") pod \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " Jul 2 07:58:23.374355 kubelet[1513]: I0702 07:58:23.373940 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-hubble-tls\") pod \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " Jul 2 07:58:23.374355 kubelet[1513]: I0702 07:58:23.373961 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-lib-modules\") pod \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " Jul 2 07:58:23.374355 kubelet[1513]: I0702 07:58:23.373988 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cilium-config-path\") pod \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " Jul 2 07:58:23.374355 kubelet[1513]: I0702 07:58:23.374018 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-clustermesh-secrets\") pod \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\" (UID: \"5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc\") " Jul 2 07:58:23.374584 kubelet[1513]: I0702 07:58:23.374501 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cni-path" (OuterVolumeSpecName: "cni-path") pod "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" (UID: "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.374584 kubelet[1513]: I0702 07:58:23.374573 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" (UID: "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.374708 kubelet[1513]: I0702 07:58:23.374603 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" (UID: "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.374708 kubelet[1513]: I0702 07:58:23.374629 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" (UID: "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.375383 kubelet[1513]: I0702 07:58:23.375343 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-hostproc" (OuterVolumeSpecName: "hostproc") pod "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" (UID: "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.375587 kubelet[1513]: I0702 07:58:23.375563 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" (UID: "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.375903 kubelet[1513]: I0702 07:58:23.375848 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" (UID: "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.376777 kubelet[1513]: I0702 07:58:23.376363 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" (UID: "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.377287 kubelet[1513]: I0702 07:58:23.376395 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" (UID: "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.381052 kubelet[1513]: I0702 07:58:23.376904 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" (UID: "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.382681 env[1216]: time="2024-07-02T07:58:23.382201508Z" level=info msg="RemoveContainer for \"e652525ccb3e37adfcedb7699d2778461d2450d7119cb670264226c8eba7d7e0\"" Jul 2 07:58:23.387887 env[1216]: time="2024-07-02T07:58:23.387610836Z" level=info msg="RemoveContainer for \"e652525ccb3e37adfcedb7699d2778461d2450d7119cb670264226c8eba7d7e0\" returns successfully" Jul 2 07:58:23.389451 kubelet[1513]: I0702 07:58:23.389402 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" (UID: "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:58:23.389963 kubelet[1513]: I0702 07:58:23.389774 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" (UID: "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:58:23.390127 kubelet[1513]: I0702 07:58:23.389781 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" (UID: "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:58:23.390251 kubelet[1513]: I0702 07:58:23.389873 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-kube-api-access-kd2k9" (OuterVolumeSpecName: "kube-api-access-kd2k9") pod "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" (UID: "5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc"). InnerVolumeSpecName "kube-api-access-kd2k9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:58:23.390365 kubelet[1513]: I0702 07:58:23.390132 1513 scope.go:117] "RemoveContainer" containerID="ded82c49694e96246f559322a5e021aeec378865c7a742fc8373fc345311666d" Jul 2 07:58:23.392227 env[1216]: time="2024-07-02T07:58:23.392169071Z" level=info msg="RemoveContainer for \"ded82c49694e96246f559322a5e021aeec378865c7a742fc8373fc345311666d\"" Jul 2 07:58:23.396645 env[1216]: time="2024-07-02T07:58:23.396587827Z" level=info msg="RemoveContainer for \"ded82c49694e96246f559322a5e021aeec378865c7a742fc8373fc345311666d\" returns successfully" Jul 2 07:58:23.397012 kubelet[1513]: I0702 07:58:23.396987 1513 scope.go:117] "RemoveContainer" containerID="30989e17830b8983eb1ece39982c5778ca6867c9edc2aa2e84c4a7069d348e27" Jul 2 07:58:23.398525 env[1216]: time="2024-07-02T07:58:23.398481642Z" level=info msg="RemoveContainer for \"30989e17830b8983eb1ece39982c5778ca6867c9edc2aa2e84c4a7069d348e27\"" Jul 2 07:58:23.402840 env[1216]: time="2024-07-02T07:58:23.402784258Z" level=info msg="RemoveContainer for \"30989e17830b8983eb1ece39982c5778ca6867c9edc2aa2e84c4a7069d348e27\" returns successfully" Jul 2 07:58:23.403352 kubelet[1513]: I0702 07:58:23.403249 1513 scope.go:117] "RemoveContainer" containerID="28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc" Jul 2 07:58:23.406527 env[1216]: time="2024-07-02T07:58:23.405965338Z" level=info msg="RemoveContainer for \"28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc\"" Jul 2 07:58:23.410866 env[1216]: time="2024-07-02T07:58:23.410793668Z" level=info msg="RemoveContainer for \"28f1c7b6227c08321a3b9bba07ae51c84bbad10413c0e77ba8446c82cec558bc\" returns successfully" Jul 2 07:58:23.411195 kubelet[1513]: I0702 07:58:23.411169 1513 scope.go:117] "RemoveContainer" containerID="53f7977887e983703f6eb788a2fc685a18527cac055121c399a79930658fceed" Jul 2 07:58:23.412951 env[1216]: time="2024-07-02T07:58:23.412897993Z" level=info msg="RemoveContainer for \"53f7977887e983703f6eb788a2fc685a18527cac055121c399a79930658fceed\"" Jul 2 07:58:23.416993 env[1216]: time="2024-07-02T07:58:23.416943459Z" level=info msg="RemoveContainer for \"53f7977887e983703f6eb788a2fc685a18527cac055121c399a79930658fceed\" returns successfully" Jul 2 07:58:23.418805 env[1216]: time="2024-07-02T07:58:23.418736122Z" level=info msg="StopPodSandbox for \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\"" Jul 2 07:58:23.419054 env[1216]: time="2024-07-02T07:58:23.418880255Z" level=info msg="TearDown network for sandbox \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" successfully" Jul 2 07:58:23.419054 env[1216]: time="2024-07-02T07:58:23.418932946Z" level=info msg="StopPodSandbox for \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" returns successfully" Jul 2 07:58:23.420789 env[1216]: time="2024-07-02T07:58:23.419467096Z" level=info msg="RemovePodSandbox for \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\"" Jul 2 07:58:23.420789 env[1216]: time="2024-07-02T07:58:23.419510171Z" level=info msg="Forcibly stopping sandbox \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\"" Jul 2 07:58:23.420789 env[1216]: time="2024-07-02T07:58:23.419602862Z" level=info msg="TearDown network for sandbox \"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" successfully" Jul 2 07:58:23.423961 env[1216]: time="2024-07-02T07:58:23.423909004Z" level=info msg="RemovePodSandbox 
\"cd5756457e14a458fa3609317bd7184a09b3bf044a7db43d9b55e70e0ecccec4\" returns successfully" Jul 2 07:58:23.468406 kubelet[1513]: E0702 07:58:23.468347 1513 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:58:23.474614 kubelet[1513]: I0702 07:58:23.474560 1513 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cilium-cgroup\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:23.474614 kubelet[1513]: I0702 07:58:23.474608 1513 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-host-proc-sys-net\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:23.474614 kubelet[1513]: I0702 07:58:23.474627 1513 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cni-path\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:23.474614 kubelet[1513]: I0702 07:58:23.474643 1513 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-hostproc\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:23.475070 kubelet[1513]: I0702 07:58:23.474657 1513 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-etc-cni-netd\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:23.475070 kubelet[1513]: I0702 07:58:23.474671 1513 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cilium-run\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:23.475070 kubelet[1513]: I0702 07:58:23.474683 1513 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-hubble-tls\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:23.475070 kubelet[1513]: I0702 07:58:23.474697 1513 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-lib-modules\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:23.475070 kubelet[1513]: I0702 07:58:23.474710 1513 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-cilium-config-path\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:23.475070 kubelet[1513]: I0702 07:58:23.474721 1513 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-clustermesh-secrets\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:23.475070 kubelet[1513]: I0702 07:58:23.474734 1513 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-host-proc-sys-kernel\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:23.475070 kubelet[1513]: I0702 07:58:23.474745 1513 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-bpf-maps\") on node \"10.128.0.79\" 
DevicePath \"\"" Jul 2 07:58:23.475334 kubelet[1513]: I0702 07:58:23.474772 1513 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-xtables-lock\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:23.475334 kubelet[1513]: I0702 07:58:23.474787 1513 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kd2k9\" (UniqueName: \"kubernetes.io/projected/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc-kube-api-access-kd2k9\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:23.574182 systemd[1]: Removed slice kubepods-burstable-pod5c02665f_ceb0_4d9e_bef2_d37a9af1d7fc.slice. Jul 2 07:58:23.574371 systemd[1]: kubepods-burstable-pod5c02665f_ceb0_4d9e_bef2_d37a9af1d7fc.slice: Consumed 9.517s CPU time. Jul 2 07:58:23.988576 systemd[1]: var-lib-kubelet-pods-5c02665f\x2dceb0\x2d4d9e\x2dbef2\x2dd37a9af1d7fc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkd2k9.mount: Deactivated successfully. Jul 2 07:58:23.988721 systemd[1]: var-lib-kubelet-pods-5c02665f\x2dceb0\x2d4d9e\x2dbef2\x2dd37a9af1d7fc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:58:23.988842 systemd[1]: var-lib-kubelet-pods-5c02665f\x2dceb0\x2d4d9e\x2dbef2\x2dd37a9af1d7fc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:58:24.357623 kubelet[1513]: E0702 07:58:24.357561 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:24.947982 kubelet[1513]: I0702 07:58:24.947895 1513 setters.go:580] "Node became not ready" node="10.128.0.79" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T07:58:24Z","lastTransitionTime":"2024-07-02T07:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 07:58:25.358543 kubelet[1513]: E0702 07:58:25.358466 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:25.563234 kubelet[1513]: I0702 07:58:25.563167 1513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" path="/var/lib/kubelet/pods/5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc/volumes" Jul 2 07:58:26.247494 kubelet[1513]: I0702 07:58:26.247420 1513 topology_manager.go:215] "Topology Admit Handler" podUID="81401a4e-14f5-426e-a3a2-a12754ef8129" podNamespace="kube-system" podName="cilium-operator-599987898-nsjck" Jul 2 07:58:26.247729 kubelet[1513]: E0702 07:58:26.247507 1513 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" containerName="mount-cgroup" Jul 2 07:58:26.247729 kubelet[1513]: E0702 07:58:26.247525 1513 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" containerName="apply-sysctl-overwrites" Jul 2 07:58:26.247729 kubelet[1513]: E0702 07:58:26.247535 1513 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" containerName="mount-bpf-fs" Jul 2 07:58:26.247729 kubelet[1513]: E0702 07:58:26.247544 1513 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" containerName="clean-cilium-state" Jul 2 07:58:26.247729 kubelet[1513]: E0702 07:58:26.247555 1513 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" containerName="cilium-agent" Jul 2 07:58:26.247729 kubelet[1513]: I0702 07:58:26.247587 1513 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c02665f-ceb0-4d9e-bef2-d37a9af1d7fc" containerName="cilium-agent" Jul 2 07:58:26.254737 systemd[1]: Created slice kubepods-besteffort-pod81401a4e_14f5_426e_a3a2_a12754ef8129.slice. Jul 2 07:58:26.281036 kubelet[1513]: I0702 07:58:26.280973 1513 topology_manager.go:215] "Topology Admit Handler" podUID="ccc56079-f459-4c66-9e8a-945a3ce0b6f8" podNamespace="kube-system" podName="cilium-fdlgw" Jul 2 07:58:26.288838 systemd[1]: Created slice kubepods-burstable-podccc56079_f459_4c66_9e8a_945a3ce0b6f8.slice. Jul 2 07:58:26.359721 kubelet[1513]: E0702 07:58:26.359646 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:26.392030 kubelet[1513]: I0702 07:58:26.391967 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81401a4e-14f5-426e-a3a2-a12754ef8129-cilium-config-path\") pod \"cilium-operator-599987898-nsjck\" (UID: \"81401a4e-14f5-426e-a3a2-a12754ef8129\") " pod="kube-system/cilium-operator-599987898-nsjck" Jul 2 07:58:26.392279 kubelet[1513]: I0702 07:58:26.392100 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrgxs\" (UniqueName: \"kubernetes.io/projected/81401a4e-14f5-426e-a3a2-a12754ef8129-kube-api-access-mrgxs\") pod \"cilium-operator-599987898-nsjck\" (UID: \"81401a4e-14f5-426e-a3a2-a12754ef8129\") " pod="kube-system/cilium-operator-599987898-nsjck" Jul 2 07:58:26.392279 kubelet[1513]: I0702 07:58:26.392178 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-run\") pod \"cilium-fdlgw\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " pod="kube-system/cilium-fdlgw" Jul 2 07:58:26.392279 kubelet[1513]: I0702 07:58:26.392207 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-cgroup\") pod \"cilium-fdlgw\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " pod="kube-system/cilium-fdlgw" Jul 2 07:58:26.392279 kubelet[1513]: I0702 07:58:26.392266 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cni-path\") pod \"cilium-fdlgw\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " pod="kube-system/cilium-fdlgw" Jul 2 07:58:26.392558 kubelet[1513]: I0702 07:58:26.392294 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-ipsec-secrets\") pod \"cilium-fdlgw\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " pod="kube-system/cilium-fdlgw" Jul 2 07:58:26.392558 kubelet[1513]: I0702 07:58:26.392374 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rkqw\" (UniqueName: 
\"kubernetes.io/projected/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-kube-api-access-9rkqw\") pod \"cilium-fdlgw\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " pod="kube-system/cilium-fdlgw" Jul 2 07:58:26.392558 kubelet[1513]: I0702 07:58:26.392442 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-config-path\") pod \"cilium-fdlgw\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " pod="kube-system/cilium-fdlgw" Jul 2 07:58:26.392558 kubelet[1513]: I0702 07:58:26.392498 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-host-proc-sys-net\") pod \"cilium-fdlgw\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " pod="kube-system/cilium-fdlgw" Jul 2 07:58:26.392558 kubelet[1513]: I0702 07:58:26.392526 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-clustermesh-secrets\") pod \"cilium-fdlgw\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " pod="kube-system/cilium-fdlgw" Jul 2 07:58:26.392890 kubelet[1513]: I0702 07:58:26.392589 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-host-proc-sys-kernel\") pod \"cilium-fdlgw\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " pod="kube-system/cilium-fdlgw" Jul 2 07:58:26.392890 kubelet[1513]: I0702 07:58:26.392683 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-hostproc\") pod \"cilium-fdlgw\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " pod="kube-system/cilium-fdlgw" Jul 2 07:58:26.392890 kubelet[1513]: I0702 07:58:26.392731 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-etc-cni-netd\") pod \"cilium-fdlgw\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " pod="kube-system/cilium-fdlgw" Jul 2 07:58:26.392890 kubelet[1513]: I0702 07:58:26.392825 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-bpf-maps\") pod \"cilium-fdlgw\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " pod="kube-system/cilium-fdlgw" Jul 2 07:58:26.393111 kubelet[1513]: I0702 07:58:26.392853 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-lib-modules\") pod \"cilium-fdlgw\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " pod="kube-system/cilium-fdlgw" Jul 2 07:58:26.393111 kubelet[1513]: I0702 07:58:26.392976 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-xtables-lock\") pod \"cilium-fdlgw\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " pod="kube-system/cilium-fdlgw" Jul 
2 07:58:26.393111 kubelet[1513]: I0702 07:58:26.393006 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-hubble-tls\") pod \"cilium-fdlgw\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " pod="kube-system/cilium-fdlgw" Jul 2 07:58:26.597913 env[1216]: time="2024-07-02T07:58:26.597851976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fdlgw,Uid:ccc56079-f459-4c66-9e8a-945a3ce0b6f8,Namespace:kube-system,Attempt:0,}" Jul 2 07:58:26.624620 env[1216]: time="2024-07-02T07:58:26.624291416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:58:26.624620 env[1216]: time="2024-07-02T07:58:26.624362249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:58:26.624620 env[1216]: time="2024-07-02T07:58:26.624382953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:58:26.624982 env[1216]: time="2024-07-02T07:58:26.624742580Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e6f0bc40e21990ae1247688ad84d2e54dad230f0e421b7634d884d1cb717b8e pid=3054 runtime=io.containerd.runc.v2 Jul 2 07:58:26.644337 systemd[1]: Started cri-containerd-4e6f0bc40e21990ae1247688ad84d2e54dad230f0e421b7634d884d1cb717b8e.scope. Jul 2 07:58:26.686498 env[1216]: time="2024-07-02T07:58:26.686441310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fdlgw,Uid:ccc56079-f459-4c66-9e8a-945a3ce0b6f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e6f0bc40e21990ae1247688ad84d2e54dad230f0e421b7634d884d1cb717b8e\"" Jul 2 07:58:26.690587 env[1216]: time="2024-07-02T07:58:26.690531196Z" level=info msg="CreateContainer within sandbox \"4e6f0bc40e21990ae1247688ad84d2e54dad230f0e421b7634d884d1cb717b8e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:58:26.711116 env[1216]: time="2024-07-02T07:58:26.711054923Z" level=info msg="CreateContainer within sandbox \"4e6f0bc40e21990ae1247688ad84d2e54dad230f0e421b7634d884d1cb717b8e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0\"" Jul 2 07:58:26.712366 env[1216]: time="2024-07-02T07:58:26.712312071Z" level=info msg="StartContainer for \"44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0\"" Jul 2 07:58:26.735195 systemd[1]: Started cri-containerd-44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0.scope. Jul 2 07:58:26.757214 systemd[1]: cri-containerd-44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0.scope: Deactivated successfully. 
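The VerifyControllerAttachedVolume entries above list the cilium-fdlgw pod's volumes by name and plugin kind only. A sketch of how those volumes could be declared with the k8s.io/api/core/v1 Go types; the host paths and object names are assumptions, except where they match the CGROUP_ROOT/BIN_PATH values logged for the mount-cgroup init container further below:

    package main

    import corev1 "k8s.io/api/core/v1"

    // ciliumVolumes sketches the volumes listed for cilium-fdlgw in the log.
    // Volume names and plugin kinds come from the log itself; values marked
    // "assumed" are illustrative guesses, not taken from this node.
    func ciliumVolumes() []corev1.Volume {
        hostPath := func(name, path string) corev1.Volume {
            return corev1.Volume{
                Name: name,
                VolumeSource: corev1.VolumeSource{
                    HostPath: &corev1.HostPathVolumeSource{Path: path},
                },
            }
        }
        return []corev1.Volume{
            hostPath("cilium-run", "/var/run/cilium"),         // assumed path
            hostPath("cilium-cgroup", "/run/cilium/cgroupv2"), // matches CGROUP_ROOT in the init container env
            hostPath("cni-path", "/opt/cni/bin"),              // matches BIN_PATH in the init container env
            hostPath("hostproc", "/proc"),                     // assumed path
            {
                Name: "clustermesh-secrets",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh"}, // assumed name
                },
            },
            {
                Name: "cilium-config-path",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "cilium-config"}, // assumed name
                    },
                },
            },
        }
    }

    func main() { _ = ciliumVolumes() }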
Jul 2 07:58:26.779851 env[1216]: time="2024-07-02T07:58:26.779735826Z" level=info msg="shim disconnected" id=44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0 Jul 2 07:58:26.779851 env[1216]: time="2024-07-02T07:58:26.779850407Z" level=warning msg="cleaning up after shim disconnected" id=44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0 namespace=k8s.io Jul 2 07:58:26.780243 env[1216]: time="2024-07-02T07:58:26.779867832Z" level=info msg="cleaning up dead shim" Jul 2 07:58:26.791803 env[1216]: time="2024-07-02T07:58:26.791678749Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3112 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T07:58:26Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 07:58:26.792206 env[1216]: time="2024-07-02T07:58:26.792064206Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Jul 2 07:58:26.792545 env[1216]: time="2024-07-02T07:58:26.792489014Z" level=error msg="Failed to pipe stderr of container \"44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0\"" error="reading from a closed fifo" Jul 2 07:58:26.792969 env[1216]: time="2024-07-02T07:58:26.792903413Z" level=error msg="Failed to pipe stdout of container \"44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0\"" error="reading from a closed fifo" Jul 2 07:58:26.795701 env[1216]: time="2024-07-02T07:58:26.795628479Z" level=error msg="StartContainer for \"44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 2 07:58:26.796216 kubelet[1513]: E0702 07:58:26.796154 1513 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0" Jul 2 07:58:26.796399 kubelet[1513]: E0702 07:58:26.796373 1513 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 07:58:26.796399 kubelet[1513]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 07:58:26.796399 kubelet[1513]: rm /hostbin/cilium-mount Jul 2 07:58:26.796656 kubelet[1513]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rkqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-fdlgw_kube-system(ccc56079-f459-4c66-9e8a-945a3ce0b6f8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 07:58:26.796656 kubelet[1513]: E0702 07:58:26.796426 1513 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-fdlgw" podUID="ccc56079-f459-4c66-9e8a-945a3ce0b6f8" Jul 2 07:58:26.861047 env[1216]: time="2024-07-02T07:58:26.859658540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-nsjck,Uid:81401a4e-14f5-426e-a3a2-a12754ef8129,Namespace:kube-system,Attempt:0,}" Jul 2 07:58:26.881251 env[1216]: time="2024-07-02T07:58:26.881155429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:58:26.881251 env[1216]: time="2024-07-02T07:58:26.881218401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:58:26.881585 env[1216]: time="2024-07-02T07:58:26.881536395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:58:26.882037 env[1216]: time="2024-07-02T07:58:26.881919351Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/493b0c9de60f516b5dbcb21e53ab19d150a3fad8d13badd248eff97fa0230e19 pid=3132 runtime=io.containerd.runc.v2 Jul 2 07:58:26.898837 systemd[1]: Started cri-containerd-493b0c9de60f516b5dbcb21e53ab19d150a3fad8d13badd248eff97fa0230e19.scope. Jul 2 07:58:26.958614 env[1216]: time="2024-07-02T07:58:26.958553429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-nsjck,Uid:81401a4e-14f5-426e-a3a2-a12754ef8129,Namespace:kube-system,Attempt:0,} returns sandbox id \"493b0c9de60f516b5dbcb21e53ab19d150a3fad8d13badd248eff97fa0230e19\"" Jul 2 07:58:26.960982 env[1216]: time="2024-07-02T07:58:26.960917731Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 07:58:27.360269 kubelet[1513]: E0702 07:58:27.360189 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:27.765353 env[1216]: time="2024-07-02T07:58:27.765189324Z" level=info msg="CreateContainer within sandbox \"4e6f0bc40e21990ae1247688ad84d2e54dad230f0e421b7634d884d1cb717b8e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Jul 2 07:58:27.787711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075821343.mount: Deactivated successfully. Jul 2 07:58:27.803386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3491091303.mount: Deactivated successfully. Jul 2 07:58:27.810831 env[1216]: time="2024-07-02T07:58:27.810711905Z" level=info msg="CreateContainer within sandbox \"4e6f0bc40e21990ae1247688ad84d2e54dad230f0e421b7634d884d1cb717b8e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"ade368d74a544409801c1c8b08d16ba839b34db4733f951a63700c7264e3ed41\"" Jul 2 07:58:27.812230 env[1216]: time="2024-07-02T07:58:27.812179576Z" level=info msg="StartContainer for \"ade368d74a544409801c1c8b08d16ba839b34db4733f951a63700c7264e3ed41\"" Jul 2 07:58:27.869894 systemd[1]: Started cri-containerd-ade368d74a544409801c1c8b08d16ba839b34db4733f951a63700c7264e3ed41.scope. Jul 2 07:58:27.894536 systemd[1]: cri-containerd-ade368d74a544409801c1c8b08d16ba839b34db4733f951a63700c7264e3ed41.scope: Deactivated successfully. 
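Both mount-cgroup attempts (above and below) abort with "write /proc/self/attr/keycreate: invalid argument": before starting the container process, the OCI runtime tries to label the process keyring, since the dumped spec carries SELinuxOptions with Type:spc_t, and the kernel rejects that write. A rough, self-contained approximation of that single step, with an assumed label string; this is illustrative and not runc's actual code:

    package main

    import (
        "fmt"
        "os"
    )

    // setKeyCreateLabel approximates the step the OCI runtime performs when the
    // container spec carries SELinuxOptions: it writes the desired label to
    // /proc/self/attr/keycreate so that kernel keys created by the container
    // process get that label. On this node the kernel rejects the write with
    // EINVAL, which aborts container init and surfaces as the StartContainer
    // error seen in the log. The label below is an assumed example built from
    // the Type:spc_t / Level:s0 fields of the dumped spec.
    func setKeyCreateLabel(label string) error {
        if err := os.WriteFile("/proc/self/attr/keycreate", []byte(label), 0o600); err != nil {
            return fmt.Errorf("write /proc/self/attr/keycreate: %w", err)
        }
        return nil
    }

    func main() {
        if err := setKeyCreateLabel("system_u:system_r:spc_t:s0"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }

A rejected keycreate write is enough to fail the runtime's init, which is why the pod stays in RunContainerError and the kubelet retries with a new container attempt (Attempt:0, then Attempt:1).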
Jul 2 07:58:27.941329 env[1216]: time="2024-07-02T07:58:27.941251062Z" level=info msg="shim disconnected" id=ade368d74a544409801c1c8b08d16ba839b34db4733f951a63700c7264e3ed41 Jul 2 07:58:27.941329 env[1216]: time="2024-07-02T07:58:27.941326946Z" level=warning msg="cleaning up after shim disconnected" id=ade368d74a544409801c1c8b08d16ba839b34db4733f951a63700c7264e3ed41 namespace=k8s.io Jul 2 07:58:27.941329 env[1216]: time="2024-07-02T07:58:27.941341475Z" level=info msg="cleaning up dead shim" Jul 2 07:58:27.968041 env[1216]: time="2024-07-02T07:58:27.967968922Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3189 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T07:58:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ade368d74a544409801c1c8b08d16ba839b34db4733f951a63700c7264e3ed41/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 07:58:27.968426 env[1216]: time="2024-07-02T07:58:27.968334264Z" level=error msg="copy shim log" error="read /proc/self/fd/92: file already closed" Jul 2 07:58:27.968722 env[1216]: time="2024-07-02T07:58:27.968671139Z" level=error msg="Failed to pipe stdout of container \"ade368d74a544409801c1c8b08d16ba839b34db4733f951a63700c7264e3ed41\"" error="reading from a closed fifo" Jul 2 07:58:27.968982 env[1216]: time="2024-07-02T07:58:27.968933428Z" level=error msg="Failed to pipe stderr of container \"ade368d74a544409801c1c8b08d16ba839b34db4733f951a63700c7264e3ed41\"" error="reading from a closed fifo" Jul 2 07:58:27.972007 env[1216]: time="2024-07-02T07:58:27.971910025Z" level=error msg="StartContainer for \"ade368d74a544409801c1c8b08d16ba839b34db4733f951a63700c7264e3ed41\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 2 07:58:27.973096 kubelet[1513]: E0702 07:58:27.972357 1513 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ade368d74a544409801c1c8b08d16ba839b34db4733f951a63700c7264e3ed41" Jul 2 07:58:27.973096 kubelet[1513]: E0702 07:58:27.972524 1513 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 07:58:27.973096 kubelet[1513]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 07:58:27.973096 kubelet[1513]: rm /hostbin/cilium-mount Jul 2 07:58:27.973096 kubelet[1513]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rkqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-fdlgw_kube-system(ccc56079-f459-4c66-9e8a-945a3ce0b6f8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 07:58:27.973096 kubelet[1513]: E0702 07:58:27.972569 1513 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-fdlgw" podUID="ccc56079-f459-4c66-9e8a-945a3ce0b6f8" Jul 2 07:58:28.361206 kubelet[1513]: E0702 07:58:28.361103 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:28.469737 kubelet[1513]: E0702 07:58:28.469638 1513 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:58:28.727270 env[1216]: time="2024-07-02T07:58:28.727084632Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:58:28.730570 env[1216]: time="2024-07-02T07:58:28.730503971Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:58:28.733815 env[1216]: time="2024-07-02T07:58:28.733734484Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:58:28.734628 env[1216]: time="2024-07-02T07:58:28.734546068Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 07:58:28.738565 env[1216]: time="2024-07-02T07:58:28.738468106Z" level=info msg="CreateContainer within sandbox \"493b0c9de60f516b5dbcb21e53ab19d150a3fad8d13badd248eff97fa0230e19\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 07:58:28.765609 env[1216]: time="2024-07-02T07:58:28.765521764Z" level=info msg="CreateContainer within sandbox \"493b0c9de60f516b5dbcb21e53ab19d150a3fad8d13badd248eff97fa0230e19\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d1996b214eb3e36d693aedae923a0b1c2bef31ff237c5ae3e52a1e508444e547\"" Jul 2 07:58:28.766627 kubelet[1513]: I0702 07:58:28.766599 1513 scope.go:117] "RemoveContainer" containerID="44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0" Jul 2 07:58:28.766845 env[1216]: time="2024-07-02T07:58:28.766807172Z" level=info msg="StopPodSandbox for \"4e6f0bc40e21990ae1247688ad84d2e54dad230f0e421b7634d884d1cb717b8e\"" Jul 2 07:58:28.767098 env[1216]: time="2024-07-02T07:58:28.767035471Z" level=info msg="Container to stop \"44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:28.767298 env[1216]: time="2024-07-02T07:58:28.767267159Z" level=info msg="Container to stop \"ade368d74a544409801c1c8b08d16ba839b34db4733f951a63700c7264e3ed41\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:28.770771 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e6f0bc40e21990ae1247688ad84d2e54dad230f0e421b7634d884d1cb717b8e-shm.mount: Deactivated successfully. Jul 2 07:58:28.772600 env[1216]: time="2024-07-02T07:58:28.767415339Z" level=info msg="StartContainer for \"d1996b214eb3e36d693aedae923a0b1c2bef31ff237c5ae3e52a1e508444e547\"" Jul 2 07:58:28.775340 env[1216]: time="2024-07-02T07:58:28.775295103Z" level=info msg="RemoveContainer for \"44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0\"" Jul 2 07:58:28.784166 env[1216]: time="2024-07-02T07:58:28.784102888Z" level=info msg="RemoveContainer for \"44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0\" returns successfully" Jul 2 07:58:28.798698 systemd[1]: cri-containerd-4e6f0bc40e21990ae1247688ad84d2e54dad230f0e421b7634d884d1cb717b8e.scope: Deactivated successfully. Jul 2 07:58:28.813464 systemd[1]: Started cri-containerd-d1996b214eb3e36d693aedae923a0b1c2bef31ff237c5ae3e52a1e508444e547.scope. 
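The ImageCreate/ImageUpdate and PullImage entries just above show the operator image being resolved by digest. A minimal sketch of the same pull issued directly against containerd with the Go client, reusing the digest from the log; the socket path and the k8s.io namespace are the conventional defaults and are assumed here:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Same namespace the CRI plugin uses for its images and containers.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull the operator image by the digest referenced in the log above.
        ref := "quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
        img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled:", img.Name())
    }

WithPullUnpack also unpacks the image snapshot, so a container such as the cilium-operator one started above can be created from it immediately.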
Jul 2 07:58:28.946188 env[1216]: time="2024-07-02T07:58:28.946124396Z" level=info msg="StartContainer for \"d1996b214eb3e36d693aedae923a0b1c2bef31ff237c5ae3e52a1e508444e547\" returns successfully" Jul 2 07:58:29.023601 env[1216]: time="2024-07-02T07:58:29.023432616Z" level=info msg="shim disconnected" id=4e6f0bc40e21990ae1247688ad84d2e54dad230f0e421b7634d884d1cb717b8e Jul 2 07:58:29.023601 env[1216]: time="2024-07-02T07:58:29.023503708Z" level=warning msg="cleaning up after shim disconnected" id=4e6f0bc40e21990ae1247688ad84d2e54dad230f0e421b7634d884d1cb717b8e namespace=k8s.io Jul 2 07:58:29.023601 env[1216]: time="2024-07-02T07:58:29.023519488Z" level=info msg="cleaning up dead shim" Jul 2 07:58:29.042235 env[1216]: time="2024-07-02T07:58:29.042168990Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3259 runtime=io.containerd.runc.v2\n" Jul 2 07:58:29.042667 env[1216]: time="2024-07-02T07:58:29.042621543Z" level=info msg="TearDown network for sandbox \"4e6f0bc40e21990ae1247688ad84d2e54dad230f0e421b7634d884d1cb717b8e\" successfully" Jul 2 07:58:29.042808 env[1216]: time="2024-07-02T07:58:29.042667941Z" level=info msg="StopPodSandbox for \"4e6f0bc40e21990ae1247688ad84d2e54dad230f0e421b7634d884d1cb717b8e\" returns successfully" Jul 2 07:58:29.219883 kubelet[1513]: I0702 07:58:29.219828 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-lib-modules\") pod \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " Jul 2 07:58:29.219883 kubelet[1513]: I0702 07:58:29.219886 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-xtables-lock\") pod \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " Jul 2 07:58:29.220208 kubelet[1513]: I0702 07:58:29.219918 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-hubble-tls\") pod \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " Jul 2 07:58:29.220208 kubelet[1513]: I0702 07:58:29.219946 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-host-proc-sys-net\") pod \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " Jul 2 07:58:29.220208 kubelet[1513]: I0702 07:58:29.219969 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-run\") pod \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " Jul 2 07:58:29.220208 kubelet[1513]: I0702 07:58:29.219994 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cni-path\") pod \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " Jul 2 07:58:29.220208 kubelet[1513]: I0702 07:58:29.220020 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-ipsec-secrets\") pod \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " Jul 2 07:58:29.220208 kubelet[1513]: I0702 07:58:29.220044 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-clustermesh-secrets\") pod \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " Jul 2 07:58:29.220208 kubelet[1513]: I0702 07:58:29.220070 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-cgroup\") pod \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " Jul 2 07:58:29.220208 kubelet[1513]: I0702 07:58:29.220091 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-bpf-maps\") pod \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " Jul 2 07:58:29.220208 kubelet[1513]: I0702 07:58:29.220115 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-host-proc-sys-kernel\") pod \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " Jul 2 07:58:29.220208 kubelet[1513]: I0702 07:58:29.220145 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-config-path\") pod \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " Jul 2 07:58:29.220208 kubelet[1513]: I0702 07:58:29.220173 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-hostproc\") pod \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " Jul 2 07:58:29.220885 kubelet[1513]: I0702 07:58:29.220200 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-etc-cni-netd\") pod \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " Jul 2 07:58:29.220885 kubelet[1513]: I0702 07:58:29.220261 1513 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rkqw\" (UniqueName: \"kubernetes.io/projected/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-kube-api-access-9rkqw\") pod \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\" (UID: \"ccc56079-f459-4c66-9e8a-945a3ce0b6f8\") " Jul 2 07:58:29.221108 kubelet[1513]: I0702 07:58:29.221056 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ccc56079-f459-4c66-9e8a-945a3ce0b6f8" (UID: "ccc56079-f459-4c66-9e8a-945a3ce0b6f8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:29.224917 kubelet[1513]: I0702 07:58:29.221644 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ccc56079-f459-4c66-9e8a-945a3ce0b6f8" (UID: "ccc56079-f459-4c66-9e8a-945a3ce0b6f8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:29.225141 kubelet[1513]: I0702 07:58:29.221672 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ccc56079-f459-4c66-9e8a-945a3ce0b6f8" (UID: "ccc56079-f459-4c66-9e8a-945a3ce0b6f8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:29.225285 kubelet[1513]: I0702 07:58:29.221690 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ccc56079-f459-4c66-9e8a-945a3ce0b6f8" (UID: "ccc56079-f459-4c66-9e8a-945a3ce0b6f8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:29.225428 kubelet[1513]: I0702 07:58:29.221899 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ccc56079-f459-4c66-9e8a-945a3ce0b6f8" (UID: "ccc56079-f459-4c66-9e8a-945a3ce0b6f8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:29.225553 kubelet[1513]: I0702 07:58:29.224822 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-hostproc" (OuterVolumeSpecName: "hostproc") pod "ccc56079-f459-4c66-9e8a-945a3ce0b6f8" (UID: "ccc56079-f459-4c66-9e8a-945a3ce0b6f8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:29.225682 kubelet[1513]: I0702 07:58:29.224860 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ccc56079-f459-4c66-9e8a-945a3ce0b6f8" (UID: "ccc56079-f459-4c66-9e8a-945a3ce0b6f8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:29.226154 kubelet[1513]: I0702 07:58:29.226106 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ccc56079-f459-4c66-9e8a-945a3ce0b6f8" (UID: "ccc56079-f459-4c66-9e8a-945a3ce0b6f8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:58:29.226270 kubelet[1513]: I0702 07:58:29.226172 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ccc56079-f459-4c66-9e8a-945a3ce0b6f8" (UID: "ccc56079-f459-4c66-9e8a-945a3ce0b6f8"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:29.226270 kubelet[1513]: I0702 07:58:29.226203 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ccc56079-f459-4c66-9e8a-945a3ce0b6f8" (UID: "ccc56079-f459-4c66-9e8a-945a3ce0b6f8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:29.226270 kubelet[1513]: I0702 07:58:29.226229 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cni-path" (OuterVolumeSpecName: "cni-path") pod "ccc56079-f459-4c66-9e8a-945a3ce0b6f8" (UID: "ccc56079-f459-4c66-9e8a-945a3ce0b6f8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:29.229197 kubelet[1513]: I0702 07:58:29.229159 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-kube-api-access-9rkqw" (OuterVolumeSpecName: "kube-api-access-9rkqw") pod "ccc56079-f459-4c66-9e8a-945a3ce0b6f8" (UID: "ccc56079-f459-4c66-9e8a-945a3ce0b6f8"). InnerVolumeSpecName "kube-api-access-9rkqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:58:29.229952 kubelet[1513]: I0702 07:58:29.229916 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ccc56079-f459-4c66-9e8a-945a3ce0b6f8" (UID: "ccc56079-f459-4c66-9e8a-945a3ce0b6f8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:58:29.233192 kubelet[1513]: I0702 07:58:29.233151 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ccc56079-f459-4c66-9e8a-945a3ce0b6f8" (UID: "ccc56079-f459-4c66-9e8a-945a3ce0b6f8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:58:29.233986 kubelet[1513]: I0702 07:58:29.233922 1513 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ccc56079-f459-4c66-9e8a-945a3ce0b6f8" (UID: "ccc56079-f459-4c66-9e8a-945a3ce0b6f8"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:58:29.321110 kubelet[1513]: I0702 07:58:29.321042 1513 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-host-proc-sys-net\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:29.321110 kubelet[1513]: I0702 07:58:29.321098 1513 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-lib-modules\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:29.321110 kubelet[1513]: I0702 07:58:29.321114 1513 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-xtables-lock\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:29.321110 kubelet[1513]: I0702 07:58:29.321126 1513 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-hubble-tls\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:29.321556 kubelet[1513]: I0702 07:58:29.321141 1513 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-run\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:29.321556 kubelet[1513]: I0702 07:58:29.321153 1513 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-cgroup\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:29.321556 kubelet[1513]: I0702 07:58:29.321165 1513 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cni-path\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:29.321556 kubelet[1513]: I0702 07:58:29.321178 1513 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-ipsec-secrets\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:29.321556 kubelet[1513]: I0702 07:58:29.321215 1513 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-clustermesh-secrets\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:29.321556 kubelet[1513]: I0702 07:58:29.321228 1513 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-host-proc-sys-kernel\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:29.321556 kubelet[1513]: I0702 07:58:29.321241 1513 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-bpf-maps\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:29.321556 kubelet[1513]: I0702 07:58:29.321253 1513 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9rkqw\" (UniqueName: \"kubernetes.io/projected/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-kube-api-access-9rkqw\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:29.321556 kubelet[1513]: I0702 07:58:29.321271 1513 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-cilium-config-path\") 
on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:29.321556 kubelet[1513]: I0702 07:58:29.321289 1513 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-hostproc\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:29.321556 kubelet[1513]: I0702 07:58:29.321302 1513 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ccc56079-f459-4c66-9e8a-945a3ce0b6f8-etc-cni-netd\") on node \"10.128.0.79\" DevicePath \"\"" Jul 2 07:58:29.361635 kubelet[1513]: E0702 07:58:29.361566 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:29.504287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e6f0bc40e21990ae1247688ad84d2e54dad230f0e421b7634d884d1cb717b8e-rootfs.mount: Deactivated successfully. Jul 2 07:58:29.504558 systemd[1]: var-lib-kubelet-pods-ccc56079\x2df459\x2d4c66\x2d9e8a\x2d945a3ce0b6f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9rkqw.mount: Deactivated successfully. Jul 2 07:58:29.504686 systemd[1]: var-lib-kubelet-pods-ccc56079\x2df459\x2d4c66\x2d9e8a\x2d945a3ce0b6f8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 07:58:29.504812 systemd[1]: var-lib-kubelet-pods-ccc56079\x2df459\x2d4c66\x2d9e8a\x2d945a3ce0b6f8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:58:29.504943 systemd[1]: var-lib-kubelet-pods-ccc56079\x2df459\x2d4c66\x2d9e8a\x2d945a3ce0b6f8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:58:29.568997 systemd[1]: Removed slice kubepods-burstable-podccc56079_f459_4c66_9e8a_945a3ce0b6f8.slice. Jul 2 07:58:29.770499 kubelet[1513]: I0702 07:58:29.770370 1513 scope.go:117] "RemoveContainer" containerID="ade368d74a544409801c1c8b08d16ba839b34db4733f951a63700c7264e3ed41" Jul 2 07:58:29.775463 env[1216]: time="2024-07-02T07:58:29.775408506Z" level=info msg="RemoveContainer for \"ade368d74a544409801c1c8b08d16ba839b34db4733f951a63700c7264e3ed41\"" Jul 2 07:58:29.781588 env[1216]: time="2024-07-02T07:58:29.781531157Z" level=info msg="RemoveContainer for \"ade368d74a544409801c1c8b08d16ba839b34db4733f951a63700c7264e3ed41\" returns successfully" Jul 2 07:58:29.809307 kubelet[1513]: I0702 07:58:29.809162 1513 topology_manager.go:215] "Topology Admit Handler" podUID="5422adb2-7d9c-4bf9-a171-9e8f0165e194" podNamespace="kube-system" podName="cilium-krkv8" Jul 2 07:58:29.809307 kubelet[1513]: E0702 07:58:29.809312 1513 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ccc56079-f459-4c66-9e8a-945a3ce0b6f8" containerName="mount-cgroup" Jul 2 07:58:29.809621 kubelet[1513]: E0702 07:58:29.809331 1513 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ccc56079-f459-4c66-9e8a-945a3ce0b6f8" containerName="mount-cgroup" Jul 2 07:58:29.809621 kubelet[1513]: I0702 07:58:29.809363 1513 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc56079-f459-4c66-9e8a-945a3ce0b6f8" containerName="mount-cgroup" Jul 2 07:58:29.809621 kubelet[1513]: I0702 07:58:29.809426 1513 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc56079-f459-4c66-9e8a-945a3ce0b6f8" containerName="mount-cgroup" Jul 2 07:58:29.816887 systemd[1]: Created slice kubepods-burstable-pod5422adb2_7d9c_4bf9_a171_9e8f0165e194.slice. 
Jul 2 07:58:29.866317 kubelet[1513]: I0702 07:58:29.866237 1513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-nsjck" podStartSLOduration=2.090541492 podStartE2EDuration="3.866206765s" podCreationTimestamp="2024-07-02 07:58:26 +0000 UTC" firstStartedPulling="2024-07-02 07:58:26.960397926 +0000 UTC m=+64.618500668" lastFinishedPulling="2024-07-02 07:58:28.736063188 +0000 UTC m=+66.394165941" observedRunningTime="2024-07-02 07:58:29.818624845 +0000 UTC m=+67.476727593" watchObservedRunningTime="2024-07-02 07:58:29.866206765 +0000 UTC m=+67.524309507" Jul 2 07:58:29.885885 kubelet[1513]: W0702 07:58:29.885819 1513 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccc56079_f459_4c66_9e8a_945a3ce0b6f8.slice/cri-containerd-44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0.scope WatchSource:0}: container "44661f679993aec95b49a699ef5714e19c4b8063216c382873b2b264a7d8bcb0" in namespace "k8s.io": not found Jul 2 07:58:29.925769 kubelet[1513]: I0702 07:58:29.925696 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5422adb2-7d9c-4bf9-a171-9e8f0165e194-bpf-maps\") pod \"cilium-krkv8\" (UID: \"5422adb2-7d9c-4bf9-a171-9e8f0165e194\") " pod="kube-system/cilium-krkv8" Jul 2 07:58:29.925996 kubelet[1513]: I0702 07:58:29.925785 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5422adb2-7d9c-4bf9-a171-9e8f0165e194-cni-path\") pod \"cilium-krkv8\" (UID: \"5422adb2-7d9c-4bf9-a171-9e8f0165e194\") " pod="kube-system/cilium-krkv8" Jul 2 07:58:29.925996 kubelet[1513]: I0702 07:58:29.925817 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5422adb2-7d9c-4bf9-a171-9e8f0165e194-etc-cni-netd\") pod \"cilium-krkv8\" (UID: \"5422adb2-7d9c-4bf9-a171-9e8f0165e194\") " pod="kube-system/cilium-krkv8" Jul 2 07:58:29.925996 kubelet[1513]: I0702 07:58:29.925844 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5422adb2-7d9c-4bf9-a171-9e8f0165e194-cilium-config-path\") pod \"cilium-krkv8\" (UID: \"5422adb2-7d9c-4bf9-a171-9e8f0165e194\") " pod="kube-system/cilium-krkv8" Jul 2 07:58:29.925996 kubelet[1513]: I0702 07:58:29.925872 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5422adb2-7d9c-4bf9-a171-9e8f0165e194-hubble-tls\") pod \"cilium-krkv8\" (UID: \"5422adb2-7d9c-4bf9-a171-9e8f0165e194\") " pod="kube-system/cilium-krkv8" Jul 2 07:58:29.925996 kubelet[1513]: I0702 07:58:29.925897 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjxq8\" (UniqueName: \"kubernetes.io/projected/5422adb2-7d9c-4bf9-a171-9e8f0165e194-kube-api-access-bjxq8\") pod \"cilium-krkv8\" (UID: \"5422adb2-7d9c-4bf9-a171-9e8f0165e194\") " pod="kube-system/cilium-krkv8" Jul 2 07:58:29.925996 kubelet[1513]: I0702 07:58:29.925921 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/5422adb2-7d9c-4bf9-a171-9e8f0165e194-lib-modules\") pod \"cilium-krkv8\" (UID: \"5422adb2-7d9c-4bf9-a171-9e8f0165e194\") " pod="kube-system/cilium-krkv8" Jul 2 07:58:29.925996 kubelet[1513]: I0702 07:58:29.925946 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5422adb2-7d9c-4bf9-a171-9e8f0165e194-xtables-lock\") pod \"cilium-krkv8\" (UID: \"5422adb2-7d9c-4bf9-a171-9e8f0165e194\") " pod="kube-system/cilium-krkv8" Jul 2 07:58:29.925996 kubelet[1513]: I0702 07:58:29.925972 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5422adb2-7d9c-4bf9-a171-9e8f0165e194-host-proc-sys-net\") pod \"cilium-krkv8\" (UID: \"5422adb2-7d9c-4bf9-a171-9e8f0165e194\") " pod="kube-system/cilium-krkv8" Jul 2 07:58:29.926423 kubelet[1513]: I0702 07:58:29.925997 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5422adb2-7d9c-4bf9-a171-9e8f0165e194-cilium-ipsec-secrets\") pod \"cilium-krkv8\" (UID: \"5422adb2-7d9c-4bf9-a171-9e8f0165e194\") " pod="kube-system/cilium-krkv8" Jul 2 07:58:29.926423 kubelet[1513]: I0702 07:58:29.926024 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5422adb2-7d9c-4bf9-a171-9e8f0165e194-cilium-run\") pod \"cilium-krkv8\" (UID: \"5422adb2-7d9c-4bf9-a171-9e8f0165e194\") " pod="kube-system/cilium-krkv8" Jul 2 07:58:29.926423 kubelet[1513]: I0702 07:58:29.926052 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5422adb2-7d9c-4bf9-a171-9e8f0165e194-hostproc\") pod \"cilium-krkv8\" (UID: \"5422adb2-7d9c-4bf9-a171-9e8f0165e194\") " pod="kube-system/cilium-krkv8" Jul 2 07:58:29.926423 kubelet[1513]: I0702 07:58:29.926079 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5422adb2-7d9c-4bf9-a171-9e8f0165e194-cilium-cgroup\") pod \"cilium-krkv8\" (UID: \"5422adb2-7d9c-4bf9-a171-9e8f0165e194\") " pod="kube-system/cilium-krkv8" Jul 2 07:58:29.926423 kubelet[1513]: I0702 07:58:29.926104 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5422adb2-7d9c-4bf9-a171-9e8f0165e194-clustermesh-secrets\") pod \"cilium-krkv8\" (UID: \"5422adb2-7d9c-4bf9-a171-9e8f0165e194\") " pod="kube-system/cilium-krkv8" Jul 2 07:58:29.926423 kubelet[1513]: I0702 07:58:29.926131 1513 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5422adb2-7d9c-4bf9-a171-9e8f0165e194-host-proc-sys-kernel\") pod \"cilium-krkv8\" (UID: \"5422adb2-7d9c-4bf9-a171-9e8f0165e194\") " pod="kube-system/cilium-krkv8" Jul 2 07:58:30.125649 env[1216]: time="2024-07-02T07:58:30.125584225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-krkv8,Uid:5422adb2-7d9c-4bf9-a171-9e8f0165e194,Namespace:kube-system,Attempt:0,}" Jul 2 07:58:30.154921 env[1216]: time="2024-07-02T07:58:30.154803914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:58:30.154921 env[1216]: time="2024-07-02T07:58:30.154863482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:58:30.155249 env[1216]: time="2024-07-02T07:58:30.154902317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:58:30.155621 env[1216]: time="2024-07-02T07:58:30.155541714Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/00c7274dc7d9a549a6f76af2646494a439bce06b455a821201c85cb255419f49 pid=3288 runtime=io.containerd.runc.v2 Jul 2 07:58:30.174924 systemd[1]: Started cri-containerd-00c7274dc7d9a549a6f76af2646494a439bce06b455a821201c85cb255419f49.scope. Jul 2 07:58:30.211529 env[1216]: time="2024-07-02T07:58:30.211452067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-krkv8,Uid:5422adb2-7d9c-4bf9-a171-9e8f0165e194,Namespace:kube-system,Attempt:0,} returns sandbox id \"00c7274dc7d9a549a6f76af2646494a439bce06b455a821201c85cb255419f49\"" Jul 2 07:58:30.216075 env[1216]: time="2024-07-02T07:58:30.216022108Z" level=info msg="CreateContainer within sandbox \"00c7274dc7d9a549a6f76af2646494a439bce06b455a821201c85cb255419f49\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:58:30.233476 env[1216]: time="2024-07-02T07:58:30.233389498Z" level=info msg="CreateContainer within sandbox \"00c7274dc7d9a549a6f76af2646494a439bce06b455a821201c85cb255419f49\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9e8d9907ce7ad0d12f990cba437528001da1af8c93aebd401e56ca2e841e5b36\"" Jul 2 07:58:30.234273 env[1216]: time="2024-07-02T07:58:30.234218065Z" level=info msg="StartContainer for \"9e8d9907ce7ad0d12f990cba437528001da1af8c93aebd401e56ca2e841e5b36\"" Jul 2 07:58:30.257041 systemd[1]: Started cri-containerd-9e8d9907ce7ad0d12f990cba437528001da1af8c93aebd401e56ca2e841e5b36.scope. Jul 2 07:58:30.300875 env[1216]: time="2024-07-02T07:58:30.300809208Z" level=info msg="StartContainer for \"9e8d9907ce7ad0d12f990cba437528001da1af8c93aebd401e56ca2e841e5b36\" returns successfully" Jul 2 07:58:30.311482 systemd[1]: cri-containerd-9e8d9907ce7ad0d12f990cba437528001da1af8c93aebd401e56ca2e841e5b36.scope: Deactivated successfully. 
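Annotation: the VerifyControllerAttachedVolume entries above list the volumes the new cilium-krkv8 pod mounts — hostPaths such as bpf-maps and cni-path, the cilium-config-path ConfigMap, and the clustermesh/IPsec Secrets. A hedged sketch of how a few of these would be declared with the corev1 Go types follows; the host paths and object names ("/sys/fs/bpf", "cilium-config", "cilium-clustermesh") follow typical upstream Cilium manifests and are assumptions, not values read from this log.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    dirOrCreate := corev1.HostPathDirectoryOrCreate

    // A few of the volumes named above, as a Pod/DaemonSet spec would declare
    // them. Illustrative only.
    volumes := []corev1.Volume{
        {
            Name: "bpf-maps",
            VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{Path: "/sys/fs/bpf", Type: &dirOrCreate},
            },
        },
        {
            Name: "cilium-config-path",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "cilium-config"},
                },
            },
        },
        {
            Name: "clustermesh-secrets",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh"},
            },
        },
    }

    for _, v := range volumes {
        fmt.Println("volume:", v.Name)
    }
}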
Jul 2 07:58:30.349079 env[1216]: time="2024-07-02T07:58:30.349004850Z" level=info msg="shim disconnected" id=9e8d9907ce7ad0d12f990cba437528001da1af8c93aebd401e56ca2e841e5b36 Jul 2 07:58:30.349079 env[1216]: time="2024-07-02T07:58:30.349073777Z" level=warning msg="cleaning up after shim disconnected" id=9e8d9907ce7ad0d12f990cba437528001da1af8c93aebd401e56ca2e841e5b36 namespace=k8s.io Jul 2 07:58:30.349079 env[1216]: time="2024-07-02T07:58:30.349089863Z" level=info msg="cleaning up dead shim" Jul 2 07:58:30.362391 env[1216]: time="2024-07-02T07:58:30.362327238Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3371 runtime=io.containerd.runc.v2\n" Jul 2 07:58:30.362590 kubelet[1513]: E0702 07:58:30.362399 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:30.787193 env[1216]: time="2024-07-02T07:58:30.787075048Z" level=info msg="CreateContainer within sandbox \"00c7274dc7d9a549a6f76af2646494a439bce06b455a821201c85cb255419f49\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:58:30.810707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1786498041.mount: Deactivated successfully. Jul 2 07:58:30.813258 env[1216]: time="2024-07-02T07:58:30.813141156Z" level=info msg="CreateContainer within sandbox \"00c7274dc7d9a549a6f76af2646494a439bce06b455a821201c85cb255419f49\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a089bd3e52945bb38efe41b94c27b6076c75beae5f3e962057580d62cdd393d6\"" Jul 2 07:58:30.814614 env[1216]: time="2024-07-02T07:58:30.814512215Z" level=info msg="StartContainer for \"a089bd3e52945bb38efe41b94c27b6076c75beae5f3e962057580d62cdd393d6\"" Jul 2 07:58:30.854441 systemd[1]: Started cri-containerd-a089bd3e52945bb38efe41b94c27b6076c75beae5f3e962057580d62cdd393d6.scope. Jul 2 07:58:30.898297 env[1216]: time="2024-07-02T07:58:30.898231808Z" level=info msg="StartContainer for \"a089bd3e52945bb38efe41b94c27b6076c75beae5f3e962057580d62cdd393d6\" returns successfully" Jul 2 07:58:30.906748 systemd[1]: cri-containerd-a089bd3e52945bb38efe41b94c27b6076c75beae5f3e962057580d62cdd393d6.scope: Deactivated successfully. Jul 2 07:58:30.948482 env[1216]: time="2024-07-02T07:58:30.948413358Z" level=info msg="shim disconnected" id=a089bd3e52945bb38efe41b94c27b6076c75beae5f3e962057580d62cdd393d6 Jul 2 07:58:30.949051 env[1216]: time="2024-07-02T07:58:30.948987660Z" level=warning msg="cleaning up after shim disconnected" id=a089bd3e52945bb38efe41b94c27b6076c75beae5f3e962057580d62cdd393d6 namespace=k8s.io Jul 2 07:58:30.949051 env[1216]: time="2024-07-02T07:58:30.949030084Z" level=info msg="cleaning up dead shim" Jul 2 07:58:30.960785 env[1216]: time="2024-07-02T07:58:30.960684780Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3436 runtime=io.containerd.runc.v2\n" Jul 2 07:58:31.363636 kubelet[1513]: E0702 07:58:31.363562 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:31.504716 systemd[1]: run-containerd-runc-k8s.io-a089bd3e52945bb38efe41b94c27b6076c75beae5f3e962057580d62cdd393d6-runc.MCLJWq.mount: Deactivated successfully. 
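Annotation: each "shim disconnected" / scope-deactivated pair above is containerd's runc v2 shim tearing down after a short-lived Cilium init container exits. The same containers can be inspected out of band with containerd's Go client in the "k8s.io" namespace the log mentions. A sketch, assuming containerd 1.x module paths (github.com/containerd/containerd) and the default socket; the container ID is copied from the log.

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    // Connect to the same containerd instance the kubelet's CRI calls use.
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // CRI-managed containers live in the "k8s.io" namespace seen in the log.
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    // apply-sysctl-overwrites container from the entries above.
    container, err := client.LoadContainer(ctx, "a089bd3e52945bb38efe41b94c27b6076c75beae5f3e962057580d62cdd393d6")
    if err != nil {
        log.Fatal(err)
    }
    task, err := container.Task(ctx, nil)
    if err != nil {
        // Expected once the init container has exited and its shim is gone.
        log.Fatal(err)
    }
    status, err := task.Status(ctx)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("status:", status.Status)
}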
Jul 2 07:58:31.504890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a089bd3e52945bb38efe41b94c27b6076c75beae5f3e962057580d62cdd393d6-rootfs.mount: Deactivated successfully. Jul 2 07:58:31.564086 kubelet[1513]: I0702 07:58:31.564031 1513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccc56079-f459-4c66-9e8a-945a3ce0b6f8" path="/var/lib/kubelet/pods/ccc56079-f459-4c66-9e8a-945a3ce0b6f8/volumes" Jul 2 07:58:31.790848 env[1216]: time="2024-07-02T07:58:31.790322989Z" level=info msg="CreateContainer within sandbox \"00c7274dc7d9a549a6f76af2646494a439bce06b455a821201c85cb255419f49\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:58:31.822119 env[1216]: time="2024-07-02T07:58:31.821988161Z" level=info msg="CreateContainer within sandbox \"00c7274dc7d9a549a6f76af2646494a439bce06b455a821201c85cb255419f49\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"44fc0918bbacbda9ca18827689222f8c014e5ffb306f18402453d71125df059f\"" Jul 2 07:58:31.822968 env[1216]: time="2024-07-02T07:58:31.822920324Z" level=info msg="StartContainer for \"44fc0918bbacbda9ca18827689222f8c014e5ffb306f18402453d71125df059f\"" Jul 2 07:58:31.863082 systemd[1]: Started cri-containerd-44fc0918bbacbda9ca18827689222f8c014e5ffb306f18402453d71125df059f.scope. Jul 2 07:58:31.909937 systemd[1]: cri-containerd-44fc0918bbacbda9ca18827689222f8c014e5ffb306f18402453d71125df059f.scope: Deactivated successfully. Jul 2 07:58:31.911867 env[1216]: time="2024-07-02T07:58:31.911808387Z" level=info msg="StartContainer for \"44fc0918bbacbda9ca18827689222f8c014e5ffb306f18402453d71125df059f\" returns successfully" Jul 2 07:58:31.946018 env[1216]: time="2024-07-02T07:58:31.945944601Z" level=info msg="shim disconnected" id=44fc0918bbacbda9ca18827689222f8c014e5ffb306f18402453d71125df059f Jul 2 07:58:31.946479 env[1216]: time="2024-07-02T07:58:31.946429608Z" level=warning msg="cleaning up after shim disconnected" id=44fc0918bbacbda9ca18827689222f8c014e5ffb306f18402453d71125df059f namespace=k8s.io Jul 2 07:58:31.946479 env[1216]: time="2024-07-02T07:58:31.946457417Z" level=info msg="cleaning up dead shim" Jul 2 07:58:31.958054 env[1216]: time="2024-07-02T07:58:31.957984935Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3492 runtime=io.containerd.runc.v2\n" Jul 2 07:58:32.364854 kubelet[1513]: E0702 07:58:32.364779 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:32.504834 systemd[1]: run-containerd-runc-k8s.io-44fc0918bbacbda9ca18827689222f8c014e5ffb306f18402453d71125df059f-runc.5032aM.mount: Deactivated successfully. Jul 2 07:58:32.504996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44fc0918bbacbda9ca18827689222f8c014e5ffb306f18402453d71125df059f-rootfs.mount: Deactivated successfully. Jul 2 07:58:32.798476 env[1216]: time="2024-07-02T07:58:32.798405375Z" level=info msg="CreateContainer within sandbox \"00c7274dc7d9a549a6f76af2646494a439bce06b455a821201c85cb255419f49\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:58:32.821752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount48278116.mount: Deactivated successfully. 
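Annotation: the mount-bpf-fs init container created and started above exists to ensure the BPF filesystem is mounted at /sys/fs/bpf so Cilium's pinned maps survive agent restarts. The real init container runs a script shipped in the Cilium image; the following is only a sketch of the underlying check-and-mount, assuming golang.org/x/sys/unix and its BPF_FS_MAGIC constant, and it needs CAP_SYS_ADMIN to run.

package main

import (
    "log"
    "os"

    "golang.org/x/sys/unix"
)

func main() {
    const target = "/sys/fs/bpf"

    if err := os.MkdirAll(target, 0o755); err != nil {
        log.Fatal(err)
    }

    // If the directory is already a bpffs mount, there is nothing to do.
    var st unix.Statfs_t
    if err := unix.Statfs(target, &st); err == nil && st.Type == unix.BPF_FS_MAGIC {
        return
    }

    // Otherwise mount a fresh BPF filesystem there.
    if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
        log.Fatal(err)
    }
}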
Jul 2 07:58:32.832564 env[1216]: time="2024-07-02T07:58:32.832490825Z" level=info msg="CreateContainer within sandbox \"00c7274dc7d9a549a6f76af2646494a439bce06b455a821201c85cb255419f49\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a9d914b14048c126e11537a57fb809c22427a66b560c216d6e563ecb9ef93569\"" Jul 2 07:58:32.833628 env[1216]: time="2024-07-02T07:58:32.833586100Z" level=info msg="StartContainer for \"a9d914b14048c126e11537a57fb809c22427a66b560c216d6e563ecb9ef93569\"" Jul 2 07:58:32.859052 systemd[1]: Started cri-containerd-a9d914b14048c126e11537a57fb809c22427a66b560c216d6e563ecb9ef93569.scope. Jul 2 07:58:32.903350 systemd[1]: cri-containerd-a9d914b14048c126e11537a57fb809c22427a66b560c216d6e563ecb9ef93569.scope: Deactivated successfully. Jul 2 07:58:32.908154 env[1216]: time="2024-07-02T07:58:32.908077626Z" level=info msg="StartContainer for \"a9d914b14048c126e11537a57fb809c22427a66b560c216d6e563ecb9ef93569\" returns successfully" Jul 2 07:58:32.939533 env[1216]: time="2024-07-02T07:58:32.939463624Z" level=info msg="shim disconnected" id=a9d914b14048c126e11537a57fb809c22427a66b560c216d6e563ecb9ef93569 Jul 2 07:58:32.939533 env[1216]: time="2024-07-02T07:58:32.939531469Z" level=warning msg="cleaning up after shim disconnected" id=a9d914b14048c126e11537a57fb809c22427a66b560c216d6e563ecb9ef93569 namespace=k8s.io Jul 2 07:58:32.939958 env[1216]: time="2024-07-02T07:58:32.939546743Z" level=info msg="cleaning up dead shim" Jul 2 07:58:32.954320 env[1216]: time="2024-07-02T07:58:32.954231272Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3548 runtime=io.containerd.runc.v2\n" Jul 2 07:58:33.365309 kubelet[1513]: E0702 07:58:33.365226 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:33.470618 kubelet[1513]: E0702 07:58:33.470566 1513 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:58:33.504833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9d914b14048c126e11537a57fb809c22427a66b560c216d6e563ecb9ef93569-rootfs.mount: Deactivated successfully. Jul 2 07:58:33.807791 env[1216]: time="2024-07-02T07:58:33.803216094Z" level=info msg="CreateContainer within sandbox \"00c7274dc7d9a549a6f76af2646494a439bce06b455a821201c85cb255419f49\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:58:33.829601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3869691433.mount: Deactivated successfully. Jul 2 07:58:33.843310 env[1216]: time="2024-07-02T07:58:33.843222966Z" level=info msg="CreateContainer within sandbox \"00c7274dc7d9a549a6f76af2646494a439bce06b455a821201c85cb255419f49\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f5a6007608e318c772bc574e6a9633e503e38f296a831e5231ad736452560a3e\"" Jul 2 07:58:33.844537 env[1216]: time="2024-07-02T07:58:33.844490583Z" level=info msg="StartContainer for \"f5a6007608e318c772bc574e6a9633e503e38f296a831e5231ad736452560a3e\"" Jul 2 07:58:33.871009 systemd[1]: Started cri-containerd-f5a6007608e318c772bc574e6a9633e503e38f296a831e5231ad736452560a3e.scope. 
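Annotation: with the systemd cgroup driver, the kubelet puts the burstable pod in a kubepods-burstable-pod<uid>.slice (the "Created slice" entry earlier) and containerd wraps each container in a cri-containerd-<id>.scope (the "Started cri-containerd-….scope" entries). The resulting hierarchy matches the path in the earlier cadvisor watch-event warning. A small sketch composing that cgroupfs path for the cilium-agent container; simplified, and the /sys/fs/cgroup mount point is assumed.

package main

import (
    "fmt"
    "path/filepath"
    "strings"
)

func main() {
    // Values taken from the log above.
    podUID := "5422adb2-7d9c-4bf9-a171-9e8f0165e194"
    containerID := "f5a6007608e318c772bc574e6a9633e503e38f296a831e5231ad736452560a3e"

    // Dashes in the pod UID become underscores in the slice name.
    slice := "kubepods-burstable-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    scope := "cri-containerd-" + containerID + ".scope"

    fmt.Println(filepath.Join("/sys/fs/cgroup",
        "kubepods.slice", "kubepods-burstable.slice", slice, scope))
}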
Jul 2 07:58:33.923876 env[1216]: time="2024-07-02T07:58:33.923798130Z" level=info msg="StartContainer for \"f5a6007608e318c772bc574e6a9633e503e38f296a831e5231ad736452560a3e\" returns successfully" Jul 2 07:58:34.366192 kubelet[1513]: E0702 07:58:34.366096 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:34.383825 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 07:58:34.825627 kubelet[1513]: I0702 07:58:34.825550 1513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-krkv8" podStartSLOduration=5.825525381 podStartE2EDuration="5.825525381s" podCreationTimestamp="2024-07-02 07:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:58:34.825438948 +0000 UTC m=+72.483541700" watchObservedRunningTime="2024-07-02 07:58:34.825525381 +0000 UTC m=+72.483628132" Jul 2 07:58:35.367140 kubelet[1513]: E0702 07:58:35.367063 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:36.367543 kubelet[1513]: E0702 07:58:36.367471 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:37.368414 kubelet[1513]: E0702 07:58:37.368366 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:37.414940 systemd-networkd[1024]: lxc_health: Link UP Jul 2 07:58:37.440847 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:58:37.446725 systemd-networkd[1024]: lxc_health: Gained carrier Jul 2 07:58:38.158021 systemd[1]: run-containerd-runc-k8s.io-f5a6007608e318c772bc574e6a9633e503e38f296a831e5231ad736452560a3e-runc.Fm1yQK.mount: Deactivated successfully. Jul 2 07:58:38.369555 kubelet[1513]: E0702 07:58:38.369498 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:39.370140 kubelet[1513]: E0702 07:58:39.370082 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:39.417090 systemd-networkd[1024]: lxc_health: Gained IPv6LL Jul 2 07:58:40.370998 kubelet[1513]: E0702 07:58:40.370941 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:40.449816 systemd[1]: run-containerd-runc-k8s.io-f5a6007608e318c772bc574e6a9633e503e38f296a831e5231ad736452560a3e-runc.H2JG7b.mount: Deactivated successfully. Jul 2 07:58:41.372505 kubelet[1513]: E0702 07:58:41.372449 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:42.374100 kubelet[1513]: E0702 07:58:42.374049 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:42.700079 systemd[1]: run-containerd-runc-k8s.io-f5a6007608e318c772bc574e6a9633e503e38f296a831e5231ad736452560a3e-runc.1CNwyy.mount: Deactivated successfully. 
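Annotation: the pod_startup_latency_tracker lines report podStartE2EDuration (pod creation to first observed running) and podStartSLOduration (the same minus image-pull time; for cilium-krkv8 the pull timestamps are zero because the image was already present, so the two are equal). The sketch below rebuilds the numbers from the timestamps printed earlier for cilium-operator-599987898-nsjck; the kubelet computes these internally, so the last few digits do not reproduce exactly.

package main

import (
    "fmt"
    "time"
)

// Parse timestamps in the format the kubelet prints them in the log above.
func mustParse(s string) time.Time {
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    t, err := time.Parse(layout, s)
    if err != nil {
        panic(err)
    }
    return t
}

func main() {
    created := mustParse("2024-07-02 07:58:26 +0000 UTC")
    firstPull := mustParse("2024-07-02 07:58:26.960397926 +0000 UTC")
    lastPull := mustParse("2024-07-02 07:58:28.736063188 +0000 UTC")
    running := mustParse("2024-07-02 07:58:29.866206765 +0000 UTC")

    e2e := running.Sub(created)          // podStartE2EDuration
    slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: E2E minus image-pull time

    fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
}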
Jul 2 07:58:43.313152 kubelet[1513]: E0702 07:58:43.313100 1513 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:43.375525 kubelet[1513]: E0702 07:58:43.375472 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:44.376951 kubelet[1513]: E0702 07:58:44.376829 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:45.377416 kubelet[1513]: E0702 07:58:45.377341 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 07:58:46.378580 kubelet[1513]: E0702 07:58:46.378507 1513 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
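Annotation: the "Unable to read config path" errors that recur roughly every second come from the kubelet's static-pod file source: a manifest directory is configured (staticPodPath / --pod-manifest-path, here /etc/kubernetes/manifests) but does not exist on this node, so the watcher logs and retries. The messages are harmless; creating the empty directory should be enough to quiet them, as in this minimal sketch.

package main

import (
    "log"
    "os"
)

// Create the static-pod manifest directory the kubelet is polling for.
// The kubelet should pick it up on its next file-source retry.
func main() {
    if err := os.MkdirAll("/etc/kubernetes/manifests", 0o755); err != nil {
        log.Fatal(err)
    }
}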