Dec 13 02:18:38.071389 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 02:18:38.071431 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:18:38.071478 kernel: BIOS-provided physical RAM map: Dec 13 02:18:38.071494 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Dec 13 02:18:38.071506 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Dec 13 02:18:38.071518 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Dec 13 02:18:38.071540 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Dec 13 02:18:38.071553 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Dec 13 02:18:38.071567 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd276fff] usable Dec 13 02:18:38.071580 kernel: BIOS-e820: [mem 0x00000000bd277000-0x00000000bd280fff] ACPI data Dec 13 02:18:38.071594 kernel: BIOS-e820: [mem 0x00000000bd281000-0x00000000bf8ecfff] usable Dec 13 02:18:38.071608 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Dec 13 02:18:38.071622 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Dec 13 02:18:38.071636 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Dec 13 02:18:38.071658 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Dec 13 02:18:38.071673 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Dec 13 02:18:38.071687 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Dec 13 02:18:38.071703 kernel: NX (Execute Disable) protection: active Dec 13 02:18:38.071718 kernel: efi: EFI v2.70 by EDK II Dec 13 02:18:38.071733 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd277018 Dec 13 02:18:38.071749 kernel: random: crng init done Dec 13 02:18:38.071764 kernel: SMBIOS 2.4 present. 
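The command line logged above carries the Flatcar boot configuration in one string (root label, dm-verity hash for /usr, serial console, OEM id). A minimal sketch of splitting such a line into key/value pairs, purely illustrative and not part of the boot log; the sample string below is a shortened excerpt of the logged command line:

```python
# Minimal sketch: split a kernel command line (as logged above, or read
# from /proc/cmdline) into a dict. Flags without '=' map to None; if a
# key repeats (e.g. rootflags above), the last occurrence wins.
def parse_cmdline(line: str) -> dict:
    params = {}
    for token in line.split():
        if "=" in token:
            key, _, value = token.partition("=")
            params[key] = value
        else:
            params[token] = None
    return params

if __name__ == "__main__":
    sample = ("BOOT_IMAGE=/flatcar/vmlinuz-a root=LABEL=ROOT "
              "console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce")
    for key, value in parse_cmdline(sample).items():
        print(key, "=", value)
```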
Dec 13 02:18:38.071783 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Dec 13 02:18:38.071798 kernel: Hypervisor detected: KVM Dec 13 02:18:38.071813 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 02:18:38.071828 kernel: kvm-clock: cpu 0, msr 15b19b001, primary cpu clock Dec 13 02:18:38.071843 kernel: kvm-clock: using sched offset of 12543607989 cycles Dec 13 02:18:38.071859 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 02:18:38.071873 kernel: tsc: Detected 2299.998 MHz processor Dec 13 02:18:38.071888 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 02:18:38.071904 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 02:18:38.071919 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 13 02:18:38.071939 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 02:18:38.071954 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 13 02:18:38.071969 kernel: Using GB pages for direct mapping Dec 13 02:18:38.071985 kernel: Secure boot disabled Dec 13 02:18:38.072000 kernel: ACPI: Early table checksum verification disabled Dec 13 02:18:38.072016 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 13 02:18:38.072996 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 13 02:18:38.073023 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 13 02:18:38.073052 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Dec 13 02:18:38.073069 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 13 02:18:38.073085 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Dec 13 02:18:38.073102 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 13 02:18:38.073118 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 13 02:18:38.073135 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 13 02:18:38.073155 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 13 02:18:38.073172 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 13 02:18:38.073189 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 13 02:18:38.073206 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 13 02:18:38.073223 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 13 02:18:38.073240 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 13 02:18:38.073269 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 13 02:18:38.073286 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 13 02:18:38.073303 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 13 02:18:38.073324 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 13 02:18:38.073341 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Dec 13 02:18:38.073358 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 02:18:38.073374 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 02:18:38.073391 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 02:18:38.073407 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Dec 13 
02:18:38.073424 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 13 02:18:38.081066 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Dec 13 02:18:38.081101 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Dec 13 02:18:38.081127 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Dec 13 02:18:38.081145 kernel: Zone ranges: Dec 13 02:18:38.081162 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 02:18:38.081179 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 02:18:38.081196 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 13 02:18:38.081213 kernel: Movable zone start for each node Dec 13 02:18:38.081229 kernel: Early memory node ranges Dec 13 02:18:38.081253 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 13 02:18:38.081270 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 13 02:18:38.081291 kernel: node 0: [mem 0x0000000000100000-0x00000000bd276fff] Dec 13 02:18:38.081308 kernel: node 0: [mem 0x00000000bd281000-0x00000000bf8ecfff] Dec 13 02:18:38.081325 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 13 02:18:38.081342 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 13 02:18:38.081358 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 13 02:18:38.081374 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 02:18:38.081391 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 13 02:18:38.081408 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 13 02:18:38.081425 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Dec 13 02:18:38.083260 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 02:18:38.083287 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 13 02:18:38.083303 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 02:18:38.083319 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 02:18:38.083335 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 02:18:38.083351 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 02:18:38.083368 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 02:18:38.083385 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 02:18:38.083401 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 02:18:38.083422 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 02:18:38.083439 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 02:18:38.083470 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 02:18:38.083486 kernel: Booting paravirtualized kernel on KVM Dec 13 02:18:38.083502 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 02:18:38.083526 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 02:18:38.083541 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 02:18:38.083555 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 02:18:38.083570 kernel: pcpu-alloc: [0] 0 1 Dec 13 02:18:38.083590 kernel: kvm-guest: PV spinlocks enabled Dec 13 02:18:38.083607 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 02:18:38.083622 kernel: Built 1 zonelists, mobility 
grouping on. Total pages: 1932270 Dec 13 02:18:38.083636 kernel: Policy zone: Normal Dec 13 02:18:38.083653 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:18:38.083669 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 02:18:38.083685 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 02:18:38.083700 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 02:18:38.083717 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 02:18:38.083739 kernel: Memory: 7515408K/7860544K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 344876K reserved, 0K cma-reserved) Dec 13 02:18:38.083755 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 02:18:38.083772 kernel: Kernel/User page tables isolation: enabled Dec 13 02:18:38.083788 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 02:18:38.083805 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 02:18:38.083822 kernel: rcu: Hierarchical RCU implementation. Dec 13 02:18:38.083839 kernel: rcu: RCU event tracing is enabled. Dec 13 02:18:38.083854 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 02:18:38.083874 kernel: Rude variant of Tasks RCU enabled. Dec 13 02:18:38.083901 kernel: Tracing variant of Tasks RCU enabled. Dec 13 02:18:38.083919 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 02:18:38.083943 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 02:18:38.083961 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 02:18:38.083977 kernel: Console: colour dummy device 80x25 Dec 13 02:18:38.083995 kernel: printk: console [ttyS0] enabled Dec 13 02:18:38.084011 kernel: ACPI: Core revision 20210730 Dec 13 02:18:38.084028 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 02:18:38.084046 kernel: x2apic enabled Dec 13 02:18:38.084068 kernel: Switched APIC routing to physical x2apic. Dec 13 02:18:38.084085 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 13 02:18:38.084102 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 02:18:38.084119 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Dec 13 02:18:38.084136 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 13 02:18:38.084153 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 13 02:18:38.084169 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 02:18:38.084191 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 02:18:38.084208 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 02:18:38.084225 kernel: Spectre V2 : Mitigation: IBRS Dec 13 02:18:38.084253 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 02:18:38.084271 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 02:18:38.084288 kernel: RETBleed: Mitigation: IBRS Dec 13 02:18:38.084305 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 02:18:38.084322 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Dec 13 02:18:38.084340 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 02:18:38.084361 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 02:18:38.084378 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 02:18:38.084395 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 02:18:38.084412 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 02:18:38.084430 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 02:18:38.084462 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 02:18:38.084479 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 02:18:38.084496 kernel: Freeing SMP alternatives memory: 32K Dec 13 02:18:38.084513 kernel: pid_max: default: 32768 minimum: 301 Dec 13 02:18:38.084534 kernel: LSM: Security Framework initializing Dec 13 02:18:38.084551 kernel: SELinux: Initializing. Dec 13 02:18:38.084568 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 02:18:38.084584 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 02:18:38.084601 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 13 02:18:38.084619 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 13 02:18:38.084636 kernel: signal: max sigframe size: 1776 Dec 13 02:18:38.084653 kernel: rcu: Hierarchical SRCU implementation. Dec 13 02:18:38.084671 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 02:18:38.084692 kernel: smp: Bringing up secondary CPUs ... Dec 13 02:18:38.084709 kernel: x86: Booting SMP configuration: Dec 13 02:18:38.084726 kernel: .... node #0, CPUs: #1 Dec 13 02:18:38.084742 kernel: kvm-clock: cpu 1, msr 15b19b041, secondary cpu clock Dec 13 02:18:38.084761 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 02:18:38.084779 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Dec 13 02:18:38.084797 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 02:18:38.084814 kernel: smpboot: Max logical packages: 1 Dec 13 02:18:38.084835 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 13 02:18:38.084852 kernel: devtmpfs: initialized Dec 13 02:18:38.084869 kernel: x86/mm: Memory block size: 128MB Dec 13 02:18:38.084885 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 13 02:18:38.084903 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 02:18:38.084920 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 02:18:38.084937 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 02:18:38.084955 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 02:18:38.084972 kernel: audit: initializing netlink subsys (disabled) Dec 13 02:18:38.084993 kernel: audit: type=2000 audit(1734056316.988:1): state=initialized audit_enabled=0 res=1 Dec 13 02:18:38.085010 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 02:18:38.085027 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 02:18:38.085043 kernel: cpuidle: using governor menu Dec 13 02:18:38.085061 kernel: ACPI: bus type PCI registered Dec 13 02:18:38.085078 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 02:18:38.085095 kernel: dca service started, version 1.12.1 Dec 13 02:18:38.085112 kernel: PCI: Using configuration type 1 for base access Dec 13 02:18:38.085129 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 02:18:38.085149 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 02:18:38.085167 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 02:18:38.085184 kernel: ACPI: Added _OSI(Module Device) Dec 13 02:18:38.085201 kernel: ACPI: Added _OSI(Processor Device) Dec 13 02:18:38.085218 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 02:18:38.085235 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 02:18:38.085261 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 02:18:38.085277 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 02:18:38.085294 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 02:18:38.085316 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 02:18:38.085332 kernel: ACPI: Interpreter enabled Dec 13 02:18:38.085349 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 02:18:38.085367 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 02:18:38.085384 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 02:18:38.085402 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 02:18:38.085419 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 02:18:38.092515 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 02:18:38.092715 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Dec 13 02:18:38.092740 kernel: PCI host bridge to bus 0000:00 Dec 13 02:18:38.092902 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 02:18:38.093056 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 02:18:38.093205 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 02:18:38.093363 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 13 02:18:38.098864 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 02:18:38.099077 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 02:18:38.099272 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Dec 13 02:18:38.099468 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 02:18:38.099641 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 02:18:38.099824 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Dec 13 02:18:38.099985 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Dec 13 02:18:38.100161 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Dec 13 02:18:38.100347 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 02:18:38.112912 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Dec 13 02:18:38.113124 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Dec 13 02:18:38.113327 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 02:18:38.113566 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Dec 13 02:18:38.113745 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Dec 13 02:18:38.113774 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 02:18:38.113792 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 02:18:38.113810 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 02:18:38.113827 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 02:18:38.113844 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 02:18:38.113861 kernel: iommu: Default domain type: Translated Dec 13 02:18:38.113879 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 02:18:38.113896 kernel: vgaarb: loaded Dec 13 02:18:38.113915 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 02:18:38.113937 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 02:18:38.113955 kernel: PTP clock support registered Dec 13 02:18:38.113972 kernel: Registered efivars operations Dec 13 02:18:38.113989 kernel: PCI: Using ACPI for IRQ routing Dec 13 02:18:38.114006 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 02:18:38.114022 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 13 02:18:38.114039 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 13 02:18:38.114055 kernel: e820: reserve RAM buffer [mem 0xbd277000-0xbfffffff] Dec 13 02:18:38.114072 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 13 02:18:38.114091 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 13 02:18:38.114108 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 02:18:38.114123 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 02:18:38.114140 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 02:18:38.114157 kernel: pnp: PnP ACPI init Dec 13 02:18:38.114172 kernel: pnp: PnP ACPI: found 7 devices Dec 13 02:18:38.114189 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 02:18:38.114205 kernel: NET: Registered PF_INET protocol family Dec 13 02:18:38.114222 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 02:18:38.114254 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 02:18:38.114271 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 02:18:38.114289 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 02:18:38.114306 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 02:18:38.114323 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 02:18:38.114341 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 02:18:38.114358 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 02:18:38.114376 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 02:18:38.114395 kernel: NET: Registered PF_XDP protocol family Dec 13 02:18:38.114609 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 02:18:38.114771 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 02:18:38.114917 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 02:18:38.115056 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 13 02:18:38.115220 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 02:18:38.115253 kernel: PCI: CLS 0 bytes, default 64 Dec 13 02:18:38.115271 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 02:18:38.115293 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 13 02:18:38.115309 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 02:18:38.115326 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 02:18:38.115343 kernel: clocksource: Switched to clocksource tsc Dec 13 02:18:38.115360 kernel: Initialise system trusted keyrings Dec 13 02:18:38.115377 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 02:18:38.115394 kernel: Key type asymmetric registered Dec 13 02:18:38.115411 kernel: Asymmetric key parser 'x509' registered Dec 13 02:18:38.115428 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 249) Dec 13 02:18:38.117512 kernel: io scheduler mq-deadline registered Dec 13 02:18:38.117543 kernel: io scheduler kyber registered Dec 13 02:18:38.117561 kernel: io scheduler bfq registered Dec 13 02:18:38.117579 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 02:18:38.117596 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 02:18:38.117790 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 13 02:18:38.117815 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Dec 13 02:18:38.117980 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 13 02:18:38.118003 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 02:18:38.118170 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 13 02:18:38.118192 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 02:18:38.118211 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 02:18:38.118228 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 02:18:38.118255 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 13 02:18:38.118272 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 13 02:18:38.118462 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 13 02:18:38.118496 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 02:18:38.118520 kernel: i8042: Warning: Keylock active Dec 13 02:18:38.118537 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 02:18:38.118555 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 02:18:38.118726 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 02:18:38.118879 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 02:18:38.119028 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T02:18:37 UTC (1734056317) Dec 13 02:18:38.119175 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 02:18:38.119197 kernel: intel_pstate: CPU model not supported Dec 13 02:18:38.119219 kernel: pstore: Registered efi as persistent store backend Dec 13 02:18:38.119245 kernel: NET: Registered PF_INET6 protocol family Dec 13 02:18:38.119263 kernel: Segment Routing with IPv6 Dec 13 02:18:38.119280 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 02:18:38.119297 kernel: NET: Registered PF_PACKET protocol family Dec 13 02:18:38.119313 kernel: Key type dns_resolver registered Dec 13 02:18:38.119331 kernel: IPI shorthand broadcast: enabled Dec 13 02:18:38.119348 kernel: sched_clock: Marking stable (702005167, 133395518)->(858016550, -22615865) Dec 13 02:18:38.119366 kernel: registered taskstats version 1 Dec 13 02:18:38.119387 kernel: Loading compiled-in X.509 certificates Dec 13 02:18:38.119405 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 02:18:38.119423 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 02:18:38.119455 kernel: Key type .fscrypt registered Dec 13 02:18:38.137506 kernel: Key type fscrypt-provisioning registered Dec 13 02:18:38.137528 kernel: pstore: Using crash dump compression: deflate Dec 13 02:18:38.137547 kernel: ima: Allocated hash algorithm: sha1 Dec 13 02:18:38.137565 kernel: ima: No architecture policies found Dec 13 02:18:38.137590 kernel: clk: Disabling unused clocks Dec 13 02:18:38.137608 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 
02:18:38.137624 kernel: Write protecting the kernel read-only data: 28672k Dec 13 02:18:38.137641 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 02:18:38.137658 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 02:18:38.137675 kernel: Run /init as init process Dec 13 02:18:38.137692 kernel: with arguments: Dec 13 02:18:38.137708 kernel: /init Dec 13 02:18:38.137725 kernel: with environment: Dec 13 02:18:38.137741 kernel: HOME=/ Dec 13 02:18:38.137763 kernel: TERM=linux Dec 13 02:18:38.137780 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 02:18:38.137802 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:18:38.137823 systemd[1]: Detected virtualization kvm. Dec 13 02:18:38.137842 systemd[1]: Detected architecture x86-64. Dec 13 02:18:38.137859 systemd[1]: Running in initrd. Dec 13 02:18:38.137877 systemd[1]: No hostname configured, using default hostname. Dec 13 02:18:38.137898 systemd[1]: Hostname set to . Dec 13 02:18:38.137917 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:18:38.137934 systemd[1]: Queued start job for default target initrd.target. Dec 13 02:18:38.137952 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:18:38.137970 systemd[1]: Reached target cryptsetup.target. Dec 13 02:18:38.137989 systemd[1]: Reached target paths.target. Dec 13 02:18:38.138007 systemd[1]: Reached target slices.target. Dec 13 02:18:38.138024 systemd[1]: Reached target swap.target. Dec 13 02:18:38.138045 systemd[1]: Reached target timers.target. Dec 13 02:18:38.138064 systemd[1]: Listening on iscsid.socket. Dec 13 02:18:38.138082 systemd[1]: Listening on iscsiuio.socket. Dec 13 02:18:38.138099 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:18:38.138118 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:18:38.138136 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:18:38.138154 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:18:38.138172 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:18:38.138194 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:18:38.138232 systemd[1]: Reached target sockets.target. Dec 13 02:18:38.138263 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:18:38.138283 systemd[1]: Finished network-cleanup.service. Dec 13 02:18:38.138302 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 02:18:38.138321 systemd[1]: Starting systemd-journald.service... Dec 13 02:18:38.138343 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:18:38.138363 systemd[1]: Starting systemd-resolved.service... Dec 13 02:18:38.138381 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 02:18:38.138400 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:18:38.138420 kernel: audit: type=1130 audit(1734056318.086:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.138439 systemd[1]: Finished systemd-fsck-usr.service. 
Dec 13 02:18:38.139719 kernel: audit: type=1130 audit(1734056318.092:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.139742 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:18:38.139761 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 02:18:38.139786 kernel: audit: type=1130 audit(1734056318.118:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.139805 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 02:18:38.139824 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:18:38.139842 kernel: audit: type=1130 audit(1734056318.132:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.139865 systemd-journald[190]: Journal started Dec 13 02:18:38.139973 systemd-journald[190]: Runtime Journal (/run/log/journal/4e9a55f4bfea6b15f643b5fb4065a0cf) is 8.0M, max 148.8M, 140.8M free. Dec 13 02:18:38.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.097901 systemd-modules-load[191]: Inserted module 'overlay' Dec 13 02:18:38.149383 systemd[1]: Started systemd-journald.service. Dec 13 02:18:38.149463 kernel: audit: type=1130 audit(1734056318.143:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.165872 systemd-resolved[192]: Positive Trust Anchors: Dec 13 02:18:38.167007 systemd-resolved[192]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:18:38.167227 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:18:38.174894 systemd-resolved[192]: Defaulting to hostname 'linux'. 
Dec 13 02:18:38.189587 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 02:18:38.189625 kernel: Bridge firewalling registered Dec 13 02:18:38.176575 systemd[1]: Started systemd-resolved.service. Dec 13 02:18:38.188987 systemd-modules-load[191]: Inserted module 'br_netfilter' Dec 13 02:18:38.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.205066 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 02:18:38.222614 kernel: audit: type=1130 audit(1734056318.200:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.222659 kernel: audit: type=1130 audit(1734056318.207:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.222684 kernel: SCSI subsystem initialized Dec 13 02:18:38.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.208668 systemd[1]: Reached target nss-lookup.target. Dec 13 02:18:38.216922 systemd[1]: Starting dracut-cmdline.service... Dec 13 02:18:38.235371 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 02:18:38.235418 kernel: device-mapper: uevent: version 1.0.3 Dec 13 02:18:38.235471 dracut-cmdline[206]: dracut-dracut-053 Dec 13 02:18:38.243049 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 02:18:38.242271 systemd-modules-load[191]: Inserted module 'dm_multipath' Dec 13 02:18:38.246562 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:18:38.243482 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:18:38.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.266517 kernel: audit: type=1130 audit(1734056318.262:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.266738 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:18:38.279078 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:18:38.289581 kernel: audit: type=1130 audit(1734056318.281:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:18:38.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.332487 kernel: Loading iSCSI transport class v2.0-870. Dec 13 02:18:38.352476 kernel: iscsi: registered transport (tcp) Dec 13 02:18:38.378823 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:18:38.378903 kernel: QLogic iSCSI HBA Driver Dec 13 02:18:38.423368 systemd[1]: Finished dracut-cmdline.service. Dec 13 02:18:38.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.425118 systemd[1]: Starting dracut-pre-udev.service... Dec 13 02:18:38.482528 kernel: raid6: avx2x4 gen() 18394 MB/s Dec 13 02:18:38.499496 kernel: raid6: avx2x4 xor() 8307 MB/s Dec 13 02:18:38.516491 kernel: raid6: avx2x2 gen() 18304 MB/s Dec 13 02:18:38.533491 kernel: raid6: avx2x2 xor() 18625 MB/s Dec 13 02:18:38.550496 kernel: raid6: avx2x1 gen() 14201 MB/s Dec 13 02:18:38.567488 kernel: raid6: avx2x1 xor() 16218 MB/s Dec 13 02:18:38.584486 kernel: raid6: sse2x4 gen() 11071 MB/s Dec 13 02:18:38.601489 kernel: raid6: sse2x4 xor() 6750 MB/s Dec 13 02:18:38.618481 kernel: raid6: sse2x2 gen() 12071 MB/s Dec 13 02:18:38.635486 kernel: raid6: sse2x2 xor() 7450 MB/s Dec 13 02:18:38.652540 kernel: raid6: sse2x1 gen() 10431 MB/s Dec 13 02:18:38.670144 kernel: raid6: sse2x1 xor() 5151 MB/s Dec 13 02:18:38.670218 kernel: raid6: using algorithm avx2x4 gen() 18394 MB/s Dec 13 02:18:38.670242 kernel: raid6: .... xor() 8307 MB/s, rmw enabled Dec 13 02:18:38.670843 kernel: raid6: using avx2x2 recovery algorithm Dec 13 02:18:38.685483 kernel: xor: automatically using best checksumming function avx Dec 13 02:18:38.791482 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 02:18:38.803177 systemd[1]: Finished dracut-pre-udev.service. Dec 13 02:18:38.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.802000 audit: BPF prog-id=7 op=LOAD Dec 13 02:18:38.802000 audit: BPF prog-id=8 op=LOAD Dec 13 02:18:38.804771 systemd[1]: Starting systemd-udevd.service... Dec 13 02:18:38.821369 systemd-udevd[388]: Using default interface naming scheme 'v252'. Dec 13 02:18:38.828395 systemd[1]: Started systemd-udevd.service. Dec 13 02:18:38.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.832753 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 02:18:38.854212 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Dec 13 02:18:38.892823 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 02:18:38.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:38.894982 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:18:38.957240 systemd[1]: Finished systemd-udev-trigger.service. 
Dec 13 02:18:38.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.044781 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 02:18:39.128936 kernel: scsi host0: Virtio SCSI HBA Dec 13 02:18:39.140468 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 13 02:18:39.153093 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 02:18:39.153165 kernel: AES CTR mode by8 optimization enabled Dec 13 02:18:39.198086 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Dec 13 02:18:39.214136 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 13 02:18:39.214308 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 13 02:18:39.214483 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 13 02:18:39.214707 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 02:18:39.214914 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 02:18:39.214939 kernel: GPT:17805311 != 25165823 Dec 13 02:18:39.214962 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 02:18:39.214984 kernel: GPT:17805311 != 25165823 Dec 13 02:18:39.215005 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 02:18:39.215034 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:18:39.215057 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 13 02:18:39.260470 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (456) Dec 13 02:18:39.267686 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 02:18:39.283133 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 02:18:39.297297 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:18:39.302511 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 02:18:39.302717 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 02:18:39.309806 systemd[1]: Starting disk-uuid.service... Dec 13 02:18:39.321570 disk-uuid[520]: Primary Header is updated. Dec 13 02:18:39.321570 disk-uuid[520]: Secondary Entries is updated. Dec 13 02:18:39.321570 disk-uuid[520]: Secondary Header is updated. Dec 13 02:18:39.333491 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:18:39.355494 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:18:39.363467 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:18:40.361719 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:18:40.363310 disk-uuid[521]: The operation has completed successfully. Dec 13 02:18:40.430561 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:18:40.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:40.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:40.430697 systemd[1]: Finished disk-uuid.service. Dec 13 02:18:40.450156 systemd[1]: Starting verity-setup.service... Dec 13 02:18:40.477465 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 02:18:40.554255 systemd[1]: Found device dev-mapper-usr.device. 
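The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", "17805311 != 25165823", "Use GNU Parted to correct GPT errors") are typical when an image's partition table is smaller than the provisioned disk. One common repair is to move the backup GPT structures to the true end of the disk; a hedged sketch assuming the `sgdisk` tool from gptfdisk is installed (the log itself only suggests GNU Parted, and the device path here is an example):

```python
# Hedged sketch: relocate the backup GPT header to the end of the disk,
# the usual fix when an image was written to a larger disk (matching the
# "GPT:17805311 != 25165823" warning above). Requires root and sgdisk.
import subprocess

def move_backup_gpt_to_end(device: str = "/dev/sda") -> None:
    # 'sgdisk -e' moves the backup GPT data structures to the end of the disk.
    subprocess.run(["sgdisk", "-e", device], check=True)

# move_backup_gpt_to_end("/dev/sda")  # example device; run with care on real disks
```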
Dec 13 02:18:40.562800 systemd[1]: Finished verity-setup.service. Dec 13 02:18:40.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:40.578665 systemd[1]: Mounting sysusr-usr.mount... Dec 13 02:18:40.676712 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:18:40.677261 systemd[1]: Mounted sysusr-usr.mount. Dec 13 02:18:40.690735 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 02:18:40.691900 systemd[1]: Starting ignition-setup.service... Dec 13 02:18:40.748299 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:18:40.748342 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:18:40.748367 kernel: BTRFS info (device sda6): has skinny extents Dec 13 02:18:40.748391 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:18:40.704663 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 02:18:40.762350 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 02:18:40.781695 systemd[1]: Finished ignition-setup.service. Dec 13 02:18:40.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:40.790622 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 02:18:40.827661 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 02:18:40.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:40.827000 audit: BPF prog-id=9 op=LOAD Dec 13 02:18:40.829722 systemd[1]: Starting systemd-networkd.service... Dec 13 02:18:40.862019 systemd-networkd[695]: lo: Link UP Dec 13 02:18:40.862034 systemd-networkd[695]: lo: Gained carrier Dec 13 02:18:40.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:40.862808 systemd-networkd[695]: Enumeration completed Dec 13 02:18:40.862970 systemd[1]: Started systemd-networkd.service. Dec 13 02:18:40.863366 systemd-networkd[695]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:18:40.865507 systemd-networkd[695]: eth0: Link UP Dec 13 02:18:40.865515 systemd-networkd[695]: eth0: Gained carrier Dec 13 02:18:40.876575 systemd-networkd[695]: eth0: DHCPv4 address 10.128.0.79/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 02:18:40.877869 systemd[1]: Reached target network.target. Dec 13 02:18:40.887661 systemd[1]: Starting iscsiuio.service... Dec 13 02:18:40.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:40.954766 systemd[1]: Started iscsiuio.service. Dec 13 02:18:40.982591 iscsid[705]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:18:40.982591 iscsid[705]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 02:18:40.982591 iscsid[705]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 02:18:40.982591 iscsid[705]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 02:18:40.982591 iscsid[705]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 02:18:40.982591 iscsid[705]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:18:40.982591 iscsid[705]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 02:18:40.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:41.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:40.970007 systemd[1]: Starting iscsid.service... Dec 13 02:18:41.082675 ignition[661]: Ignition 2.14.0 Dec 13 02:18:40.989828 systemd[1]: Started iscsid.service. Dec 13 02:18:41.082689 ignition[661]: Stage: fetch-offline Dec 13 02:18:41.002419 systemd[1]: Starting dracut-initqueue.service... Dec 13 02:18:41.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:41.082771 ignition[661]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:18:41.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:41.033108 systemd[1]: Finished dracut-initqueue.service. Dec 13 02:18:41.082811 ignition[661]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:18:41.074790 systemd[1]: Reached target remote-fs-pre.target. Dec 13 02:18:41.103402 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:18:41.084887 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:18:41.103654 ignition[661]: parsed url from cmdline: "" Dec 13 02:18:41.110582 systemd[1]: Reached target remote-fs.target. Dec 13 02:18:41.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:41.103661 ignition[661]: no config URL provided Dec 13 02:18:41.128705 systemd[1]: Starting dracut-pre-mount.service... Dec 13 02:18:41.103669 ignition[661]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:18:41.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:41.155055 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 02:18:41.103681 ignition[661]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:18:41.169980 systemd[1]: Finished dracut-pre-mount.service. 
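The iscsid warning above asks for an /etc/iscsi/initiatorname.iscsi file containing an IQN-formatted InitiatorName. A minimal sketch that writes such a file; the IQN value is an example only, not taken from this host:

```python
# Minimal sketch: create the initiatorname.iscsi file that the iscsid
# warning above asks for. The IQN below is an illustrative example.
from pathlib import Path

def write_initiator_name(iqn: str,
                         path: str = "/etc/iscsi/initiatorname.iscsi") -> None:
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(f"InitiatorName={iqn}\n")

# write_initiator_name("iqn.2001-04.com.example:node1")  # needs root to write under /etc
```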
Dec 13 02:18:41.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:41.103690 ignition[661]: failed to fetch config: resource requires networking Dec 13 02:18:41.187019 systemd[1]: Starting ignition-fetch.service... Dec 13 02:18:41.104218 ignition[661]: Ignition finished successfully Dec 13 02:18:41.216517 unknown[720]: fetched base config from "system" Dec 13 02:18:41.199635 ignition[720]: Ignition 2.14.0 Dec 13 02:18:41.216527 unknown[720]: fetched base config from "system" Dec 13 02:18:41.199647 ignition[720]: Stage: fetch Dec 13 02:18:41.216534 unknown[720]: fetched user config from "gcp" Dec 13 02:18:41.199803 ignition[720]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:18:41.219010 systemd[1]: Finished ignition-fetch.service. Dec 13 02:18:41.199834 ignition[720]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:18:41.244163 systemd[1]: Starting ignition-kargs.service... Dec 13 02:18:41.207825 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:18:41.267481 systemd[1]: Finished ignition-kargs.service. Dec 13 02:18:41.208027 ignition[720]: parsed url from cmdline: "" Dec 13 02:18:41.280794 systemd[1]: Starting ignition-disks.service... Dec 13 02:18:41.208034 ignition[720]: no config URL provided Dec 13 02:18:41.306034 systemd[1]: Finished ignition-disks.service. Dec 13 02:18:41.208041 ignition[720]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:18:41.313942 systemd[1]: Reached target initrd-root-device.target. Dec 13 02:18:41.208052 ignition[720]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:18:41.335633 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:18:41.208089 ignition[720]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Dec 13 02:18:41.350615 systemd[1]: Reached target local-fs.target. Dec 13 02:18:41.213244 ignition[720]: GET result: OK Dec 13 02:18:41.364615 systemd[1]: Reached target sysinit.target. Dec 13 02:18:41.213303 ignition[720]: parsing config with SHA512: b26392c4c29c5887260a73251aa84c267209aa0ce3e3fa6ae99e337fb9a9a1e75e70046424baa6640c191f01d5ec837b917ab3dcd34df4f7361efd97c273d786 Dec 13 02:18:41.377589 systemd[1]: Reached target basic.target. Dec 13 02:18:41.217043 ignition[720]: fetch: fetch complete Dec 13 02:18:41.390749 systemd[1]: Starting systemd-fsck-root.service... 
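Ignition's fetch stage above GETs the instance user-data from the GCE metadata server at 169.254.169.254. A minimal sketch of the same request; the endpoint is taken from the log, while the "Metadata-Flavor: Google" header is standard GCE metadata-server behavior that the log does not show:

```python
# Minimal sketch: fetch GCE instance user-data the way Ignition's fetch
# stage does above. Only works from inside a GCE instance; GCE requires
# the "Metadata-Flavor: Google" request header.
import urllib.request

URL = "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"

def fetch_user_data() -> str:
    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

# print(fetch_user_data())
```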
Dec 13 02:18:41.217049 ignition[720]: fetch: fetch passed Dec 13 02:18:41.217095 ignition[720]: Ignition finished successfully Dec 13 02:18:41.256709 ignition[726]: Ignition 2.14.0 Dec 13 02:18:41.256721 ignition[726]: Stage: kargs Dec 13 02:18:41.256870 ignition[726]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:18:41.256900 ignition[726]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:18:41.265082 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:18:41.266310 ignition[726]: kargs: kargs passed Dec 13 02:18:41.266359 ignition[726]: Ignition finished successfully Dec 13 02:18:41.292847 ignition[732]: Ignition 2.14.0 Dec 13 02:18:41.292858 ignition[732]: Stage: disks Dec 13 02:18:41.292994 ignition[732]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:18:41.293019 ignition[732]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:18:41.299412 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:18:41.300643 ignition[732]: disks: disks passed Dec 13 02:18:41.300695 ignition[732]: Ignition finished successfully Dec 13 02:18:41.434608 systemd-fsck[740]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 02:18:41.619419 systemd[1]: Finished systemd-fsck-root.service. Dec 13 02:18:41.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:41.628755 systemd[1]: Mounting sysroot.mount... Dec 13 02:18:41.658616 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:18:41.655211 systemd[1]: Mounted sysroot.mount. Dec 13 02:18:41.672742 systemd[1]: Reached target initrd-root-fs.target. Dec 13 02:18:41.685882 systemd[1]: Mounting sysroot-usr.mount... Dec 13 02:18:41.690281 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 02:18:41.690335 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:18:41.690368 systemd[1]: Reached target ignition-diskful.target. Dec 13 02:18:41.771490 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (746) Dec 13 02:18:41.771540 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:18:41.771564 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:18:41.771587 kernel: BTRFS info (device sda6): has skinny extents Dec 13 02:18:41.709801 systemd[1]: Mounted sysroot-usr.mount. Dec 13 02:18:41.735086 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:18:41.803595 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:18:41.784868 systemd[1]: Starting initrd-setup-root.service... Dec 13 02:18:41.814130 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 02:18:41.826712 initrd-setup-root[769]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:18:41.837558 initrd-setup-root[777]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:18:41.847574 initrd-setup-root[785]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:18:41.857585 initrd-setup-root[793]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:18:41.892879 systemd[1]: Finished initrd-setup-root.service. Dec 13 02:18:41.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:41.894129 systemd[1]: Starting ignition-mount.service... Dec 13 02:18:41.921598 systemd[1]: Starting sysroot-boot.service... Dec 13 02:18:41.930805 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 02:18:41.930911 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 02:18:41.954590 ignition[811]: INFO : Ignition 2.14.0 Dec 13 02:18:41.954590 ignition[811]: INFO : Stage: mount Dec 13 02:18:41.954590 ignition[811]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:18:41.954590 ignition[811]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:18:41.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:41.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:41.962998 systemd[1]: Finished sysroot-boot.service. Dec 13 02:18:42.024740 ignition[811]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:18:42.024740 ignition[811]: INFO : mount: mount passed Dec 13 02:18:42.024740 ignition[811]: INFO : Ignition finished successfully Dec 13 02:18:42.095609 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (822) Dec 13 02:18:42.095650 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:18:42.095675 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:18:42.095697 kernel: BTRFS info (device sda6): has skinny extents Dec 13 02:18:42.095719 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:18:41.968998 systemd[1]: Finished ignition-mount.service. Dec 13 02:18:41.986819 systemd[1]: Starting ignition-files.service... Dec 13 02:18:42.021527 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
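[Editor's note, illustrative only: the "parsing config with SHA512: 2853..." entries above show the digest Ignition prints for /usr/lib/ignition/base.d/base.ign at the start of each stage. The short Python sketch below computes that kind of digest; treating the printed value as the SHA-512 of the raw config bytes is an assumption, not something the log states.]

import hashlib

# Sketch: compute a SHA-512 digest of the base Ignition config, the same kind
# of value shown in the "parsing config with SHA512: ..." entries above.
# The path comes from the log; everything else is illustrative.
path = "/usr/lib/ignition/base.d/base.ign"
with open(path, "rb") as f:
    print(hashlib.sha512(f.read()).hexdigest())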
Dec 13 02:18:42.117591 ignition[841]: INFO : Ignition 2.14.0 Dec 13 02:18:42.117591 ignition[841]: INFO : Stage: files Dec 13 02:18:42.117591 ignition[841]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:18:42.117591 ignition[841]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:18:42.117591 ignition[841]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:18:42.117591 ignition[841]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:18:42.117591 ignition[841]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:18:42.117591 ignition[841]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:18:42.221573 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (844) Dec 13 02:18:42.079349 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:18:42.229562 ignition[841]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:18:42.229562 ignition[841]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:18:42.229562 ignition[841]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:18:42.229562 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts" Dec 13 02:18:42.229562 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:18:42.229562 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem163988500" Dec 13 02:18:42.229562 ignition[841]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem163988500": device or resource busy Dec 13 02:18:42.229562 ignition[841]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem163988500", trying btrfs: device or resource busy Dec 13 02:18:42.229562 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem163988500" Dec 13 02:18:42.229562 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem163988500" Dec 13 02:18:42.229562 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem163988500" Dec 13 02:18:42.229562 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem163988500" Dec 13 02:18:42.229562 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts" Dec 13 02:18:42.229562 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 02:18:42.229562 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 02:18:42.229562 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Dec 13 
02:18:42.229562 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(8): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:18:42.121041 unknown[841]: wrote ssh authorized keys file for user: core Dec 13 02:18:42.496561 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(8): op(9): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3456376489" Dec 13 02:18:42.496561 ignition[841]: CRITICAL : files: createFilesystemsFiles: createFiles: op(8): op(9): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3456376489": device or resource busy Dec 13 02:18:42.496561 ignition[841]: ERROR : files: createFilesystemsFiles: createFiles: op(8): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3456376489", trying btrfs: device or resource busy Dec 13 02:18:42.496561 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(8): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3456376489" Dec 13 02:18:42.496561 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(8): op(a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3456376489" Dec 13 02:18:42.496561 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(8): op(b): [started] unmounting "/mnt/oem3456376489" Dec 13 02:18:42.496561 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(8): op(b): [finished] unmounting "/mnt/oem3456376489" Dec 13 02:18:42.496561 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Dec 13 02:18:42.496561 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:18:42.496561 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:18:42.496561 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:18:42.496561 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:18:42.496561 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:18:42.496561 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:18:42.129608 systemd-networkd[695]: eth0: Gained IPv6LL Dec 13 02:18:42.743619 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Dec 13 02:18:42.743619 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:18:42.743619 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1151808446" Dec 13 02:18:42.743619 ignition[841]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1151808446": device or resource busy Dec 13 02:18:42.743619 ignition[841]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to 
mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1151808446", trying btrfs: device or resource busy Dec 13 02:18:42.743619 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1151808446" Dec 13 02:18:42.743619 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1151808446" Dec 13 02:18:42.743619 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem1151808446" Dec 13 02:18:42.743619 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem1151808446" Dec 13 02:18:42.743619 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Dec 13 02:18:42.743619 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 02:18:42.743619 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:18:42.743619 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem838942374" Dec 13 02:18:42.743619 ignition[841]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem838942374": device or resource busy Dec 13 02:18:42.985578 ignition[841]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem838942374", trying btrfs: device or resource busy Dec 13 02:18:42.985578 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem838942374" Dec 13 02:18:42.985578 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem838942374" Dec 13 02:18:42.985578 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem838942374" Dec 13 02:18:42.985578 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem838942374" Dec 13 02:18:42.985578 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 02:18:42.985578 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:18:42.985578 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 02:18:42.985578 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK Dec 13 02:18:43.137616 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(18): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(18): [finished] 
processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(19): [started] processing unit "oem-gce.service" Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(19): [finished] processing unit "oem-gce.service" Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(1a): [started] processing unit "oem-gce-enable-oslogin.service" Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(1a): [finished] processing unit "oem-gce-enable-oslogin.service" Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(1b): [started] processing unit "containerd.service" Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(1b): op(1c): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(1b): op(1c): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(1b): [finished] processing unit "containerd.service" Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(1d): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(1d): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(1e): [started] setting preset to enabled for "oem-gce.service" Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(1e): [finished] setting preset to enabled for "oem-gce.service" Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(1f): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 02:18:43.137616 ignition[841]: INFO : files: op(1f): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 02:18:43.597608 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 13 02:18:43.597681 kernel: audit: type=1130 audit(1734056323.154:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.597707 kernel: audit: type=1130 audit(1734056323.243:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.597732 kernel: audit: type=1130 audit(1734056323.294:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.597753 kernel: audit: type=1131 audit(1734056323.294:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.597776 kernel: audit: type=1130 audit(1734056323.409:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.597791 kernel: audit: type=1131 audit(1734056323.409:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:18:43.597806 kernel: audit: type=1130 audit(1734056323.562:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.143932 systemd[1]: Finished ignition-files.service. Dec 13 02:18:43.611627 ignition[841]: INFO : files: createResultFile: createFiles: op(20): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:18:43.611627 ignition[841]: INFO : files: createResultFile: createFiles: op(20): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:18:43.611627 ignition[841]: INFO : files: files passed Dec 13 02:18:43.611627 ignition[841]: INFO : Ignition finished successfully Dec 13 02:18:43.165411 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 02:18:43.198791 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 02:18:43.703596 initrd-setup-root-after-ignition[864]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:18:43.745604 kernel: audit: type=1131 audit(1734056323.710:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.199876 systemd[1]: Starting ignition-quench.service... Dec 13 02:18:43.218087 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 02:18:43.245031 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:18:43.245167 systemd[1]: Finished ignition-quench.service. Dec 13 02:18:43.295973 systemd[1]: Reached target ignition-complete.target. 
Dec 13 02:18:43.366751 systemd[1]: Starting initrd-parse-etc.service... Dec 13 02:18:43.399947 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:18:43.400065 systemd[1]: Finished initrd-parse-etc.service. Dec 13 02:18:43.410999 systemd[1]: Reached target initrd-fs.target. Dec 13 02:18:43.486786 systemd[1]: Reached target initrd.target. Dec 13 02:18:43.507808 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 02:18:43.509168 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 02:18:43.944763 kernel: audit: type=1131 audit(1734056323.915:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.541935 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 02:18:43.565077 systemd[1]: Starting initrd-cleanup.service... Dec 13 02:18:43.994725 kernel: audit: type=1131 audit(1734056323.966:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.613728 systemd[1]: Stopped target nss-lookup.target. Dec 13 02:18:44.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.647934 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 02:18:44.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.669957 systemd[1]: Stopped target timers.target. Dec 13 02:18:44.041282 ignition[879]: INFO : Ignition 2.14.0 Dec 13 02:18:44.041282 ignition[879]: INFO : Stage: umount Dec 13 02:18:44.041282 ignition[879]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:18:44.041282 ignition[879]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:18:44.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.686875 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:18:44.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:18:44.116819 ignition[879]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:18:44.116819 ignition[879]: INFO : umount: umount passed Dec 13 02:18:44.116819 ignition[879]: INFO : Ignition finished successfully Dec 13 02:18:44.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.687060 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 02:18:44.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.712146 systemd[1]: Stopped target initrd.target. Dec 13 02:18:44.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.752889 systemd[1]: Stopped target basic.target. Dec 13 02:18:44.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.766954 systemd[1]: Stopped target ignition-complete.target. Dec 13 02:18:44.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.782904 systemd[1]: Stopped target ignition-diskful.target. Dec 13 02:18:44.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.798869 systemd[1]: Stopped target initrd-root-device.target. Dec 13 02:18:43.813900 systemd[1]: Stopped target remote-fs.target. Dec 13 02:18:43.829863 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 02:18:43.845874 systemd[1]: Stopped target sysinit.target. Dec 13 02:18:43.854903 systemd[1]: Stopped target local-fs.target. Dec 13 02:18:43.866893 systemd[1]: Stopped target local-fs-pre.target. Dec 13 02:18:43.878875 systemd[1]: Stopped target swap.target. Dec 13 02:18:44.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.891841 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:18:44.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.892032 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 02:18:43.916958 systemd[1]: Stopped target cryptsetup.target. Dec 13 02:18:43.952738 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Dec 13 02:18:44.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.952973 systemd[1]: Stopped dracut-initqueue.service. Dec 13 02:18:44.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:43.967917 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:18:44.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.425000 audit: BPF prog-id=6 op=UNLOAD Dec 13 02:18:43.968131 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 02:18:44.004836 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:18:44.005013 systemd[1]: Stopped ignition-files.service. Dec 13 02:18:44.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.021170 systemd[1]: Stopping ignition-mount.service... Dec 13 02:18:44.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.036116 systemd[1]: Stopping iscsiuio.service... Dec 13 02:18:44.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.048771 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:18:44.048988 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 02:18:44.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.061362 systemd[1]: Stopping sysroot-boot.service... Dec 13 02:18:44.076767 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:18:44.077051 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 02:18:44.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.108879 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:18:44.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.109056 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 02:18:44.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:18:44.128650 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:18:44.129785 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 02:18:44.129899 systemd[1]: Stopped iscsiuio.service. Dec 13 02:18:44.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.143311 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:18:44.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.143416 systemd[1]: Stopped ignition-mount.service. Dec 13 02:18:44.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:44.158188 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:18:44.158292 systemd[1]: Stopped sysroot-boot.service. Dec 13 02:18:44.722000 audit: BPF prog-id=8 op=UNLOAD Dec 13 02:18:44.722000 audit: BPF prog-id=7 op=UNLOAD Dec 13 02:18:44.173300 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:18:44.723000 audit: BPF prog-id=5 op=UNLOAD Dec 13 02:18:44.723000 audit: BPF prog-id=4 op=UNLOAD Dec 13 02:18:44.723000 audit: BPF prog-id=3 op=UNLOAD Dec 13 02:18:44.173462 systemd[1]: Stopped ignition-disks.service. Dec 13 02:18:44.188802 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:18:44.756591 systemd-journald[190]: Received SIGTERM from PID 1 (systemd). Dec 13 02:18:44.756654 iscsid[705]: iscsid shutting down. Dec 13 02:18:44.188867 systemd[1]: Stopped ignition-kargs.service. Dec 13 02:18:44.203665 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 02:18:44.203742 systemd[1]: Stopped ignition-fetch.service. Dec 13 02:18:44.218644 systemd[1]: Stopped target network.target. Dec 13 02:18:44.218716 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:18:44.218775 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 02:18:44.240648 systemd[1]: Stopped target paths.target. Dec 13 02:18:44.240704 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:18:44.245518 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 02:18:44.254744 systemd[1]: Stopped target slices.target. Dec 13 02:18:44.269710 systemd[1]: Stopped target sockets.target. Dec 13 02:18:44.281735 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:18:44.281769 systemd[1]: Closed iscsid.socket. Dec 13 02:18:44.307736 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:18:44.307798 systemd[1]: Closed iscsiuio.socket. Dec 13 02:18:44.314796 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:18:44.314868 systemd[1]: Stopped ignition-setup.service. Dec 13 02:18:44.327830 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:18:44.327890 systemd[1]: Stopped initrd-setup-root.service. Dec 13 02:18:44.348984 systemd[1]: Stopping systemd-networkd.service... 
Dec 13 02:18:44.352534 systemd-networkd[695]: eth0: DHCPv6 lease lost Dec 13 02:18:44.763000 audit: BPF prog-id=9 op=UNLOAD Dec 13 02:18:44.364033 systemd[1]: Stopping systemd-resolved.service... Dec 13 02:18:44.380172 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:18:44.380291 systemd[1]: Stopped systemd-resolved.service. Dec 13 02:18:44.395281 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:18:44.395406 systemd[1]: Stopped systemd-networkd.service. Dec 13 02:18:44.412298 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:18:44.412402 systemd[1]: Finished initrd-cleanup.service. Dec 13 02:18:44.427666 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:18:44.427707 systemd[1]: Closed systemd-networkd.socket. Dec 13 02:18:44.446670 systemd[1]: Stopping network-cleanup.service... Dec 13 02:18:44.453725 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:18:44.453805 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 02:18:44.475817 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:18:44.475887 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:18:44.490885 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:18:44.490948 systemd[1]: Stopped systemd-modules-load.service. Dec 13 02:18:44.505874 systemd[1]: Stopping systemd-udevd.service... Dec 13 02:18:44.521302 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:18:44.522019 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:18:44.522165 systemd[1]: Stopped systemd-udevd.service. Dec 13 02:18:44.531317 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:18:44.531400 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 02:18:44.550637 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:18:44.550704 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 02:18:44.557911 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:18:44.557973 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 02:18:44.580843 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:18:44.580912 systemd[1]: Stopped dracut-cmdline.service. Dec 13 02:18:44.596794 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:18:44.596863 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 02:18:44.615855 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 02:18:44.638561 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:18:44.638683 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 02:18:44.655184 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:18:44.655312 systemd[1]: Stopped network-cleanup.service. Dec 13 02:18:44.670006 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:18:44.670128 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 02:18:44.685943 systemd[1]: Reached target initrd-switch-root.target. Dec 13 02:18:44.702713 systemd[1]: Starting initrd-switch-root.service... Dec 13 02:18:44.719895 systemd[1]: Switching root. Dec 13 02:18:44.767100 systemd-journald[190]: Journal stopped Dec 13 02:18:49.322424 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 02:18:49.322552 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 02:18:49.322578 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 02:18:49.322607 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:18:49.322636 kernel: SELinux: policy capability open_perms=1 Dec 13 02:18:49.322663 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:18:49.322692 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:18:49.322714 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:18:49.322738 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:18:49.322766 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:18:49.322789 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:18:49.322814 systemd[1]: Successfully loaded SELinux policy in 108.478ms. Dec 13 02:18:49.322851 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.202ms. Dec 13 02:18:49.322876 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:18:49.322902 systemd[1]: Detected virtualization kvm. Dec 13 02:18:49.322925 systemd[1]: Detected architecture x86-64. Dec 13 02:18:49.322952 systemd[1]: Detected first boot. Dec 13 02:18:49.322977 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:18:49.323001 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 02:18:49.323024 systemd[1]: Populated /etc with preset unit settings. Dec 13 02:18:49.323049 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:18:49.323079 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:18:49.323105 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:18:49.323130 systemd[1]: Queued start job for default target multi-user.target. Dec 13 02:18:49.323158 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 02:18:49.323182 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 02:18:49.323206 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 02:18:49.323230 systemd[1]: Created slice system-getty.slice. Dec 13 02:18:49.323253 systemd[1]: Created slice system-modprobe.slice. Dec 13 02:18:49.323279 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 02:18:49.323303 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 02:18:49.323327 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 02:18:49.323358 systemd[1]: Created slice user.slice. Dec 13 02:18:49.323391 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:18:49.323415 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 02:18:49.323438 systemd[1]: Set up automount boot.automount. Dec 13 02:18:49.323475 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 02:18:49.323499 systemd[1]: Reached target integritysetup.target. 
Dec 13 02:18:49.323523 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:18:49.323547 systemd[1]: Reached target remote-fs.target. Dec 13 02:18:49.323571 systemd[1]: Reached target slices.target. Dec 13 02:18:49.323598 systemd[1]: Reached target swap.target. Dec 13 02:18:49.323622 systemd[1]: Reached target torcx.target. Dec 13 02:18:49.323646 systemd[1]: Reached target veritysetup.target. Dec 13 02:18:49.323681 systemd[1]: Listening on systemd-coredump.socket. Dec 13 02:18:49.323704 systemd[1]: Listening on systemd-initctl.socket. Dec 13 02:18:49.323728 kernel: kauditd_printk_skb: 47 callbacks suppressed Dec 13 02:18:49.323751 kernel: audit: type=1400 audit(1734056328.864:87): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:18:49.323774 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:18:49.323797 kernel: audit: type=1335 audit(1734056328.864:88): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 02:18:49.323825 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:18:49.323849 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:18:49.323873 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:18:49.323896 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:18:49.323920 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:18:49.323943 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 02:18:49.323966 systemd[1]: Mounting dev-hugepages.mount... Dec 13 02:18:49.323990 systemd[1]: Mounting dev-mqueue.mount... Dec 13 02:18:49.324014 systemd[1]: Mounting media.mount... Dec 13 02:18:49.324041 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:49.324065 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 02:18:49.324089 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 02:18:49.324112 systemd[1]: Mounting tmp.mount... Dec 13 02:18:49.324135 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 02:18:49.324159 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:18:49.324183 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:18:49.324207 systemd[1]: Starting modprobe@configfs.service... Dec 13 02:18:49.324230 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:18:49.324256 systemd[1]: Starting modprobe@drm.service... Dec 13 02:18:49.324280 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:18:49.324304 systemd[1]: Starting modprobe@fuse.service... Dec 13 02:18:49.324329 systemd[1]: Starting modprobe@loop.service... Dec 13 02:18:49.324353 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 02:18:49.324376 kernel: fuse: init (API version 7.34) Dec 13 02:18:49.324406 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 02:18:49.324429 kernel: loop: module loaded Dec 13 02:18:49.324471 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 02:18:49.324498 systemd[1]: Starting systemd-journald.service... Dec 13 02:18:49.324521 systemd[1]: Starting systemd-modules-load.service... 
Dec 13 02:18:49.324545 kernel: audit: type=1305 audit(1734056329.318:89): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:18:49.324573 systemd-journald[1042]: Journal started Dec 13 02:18:49.324661 systemd-journald[1042]: Runtime Journal (/run/log/journal/4e9a55f4bfea6b15f643b5fb4065a0cf) is 8.0M, max 148.8M, 140.8M free. Dec 13 02:18:48.864000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:18:48.864000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 02:18:49.318000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:18:49.318000 audit[1042]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff04398b70 a2=4000 a3=7fff04398c0c items=0 ppid=1 pid=1042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:49.373845 kernel: audit: type=1300 audit(1734056329.318:89): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff04398b70 a2=4000 a3=7fff04398c0c items=0 ppid=1 pid=1042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:49.373938 systemd[1]: Starting systemd-network-generator.service... Dec 13 02:18:49.373975 kernel: audit: type=1327 audit(1734056329.318:89): proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:18:49.318000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:18:49.401504 systemd[1]: Starting systemd-remount-fs.service... Dec 13 02:18:49.416480 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:18:49.436184 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:49.444485 systemd[1]: Started systemd-journald.service. Dec 13 02:18:49.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.454706 systemd[1]: Mounted dev-hugepages.mount. Dec 13 02:18:49.475491 kernel: audit: type=1130 audit(1734056329.451:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.481744 systemd[1]: Mounted dev-mqueue.mount. Dec 13 02:18:49.488752 systemd[1]: Mounted media.mount. Dec 13 02:18:49.495711 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 02:18:49.504721 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 02:18:49.513718 systemd[1]: Mounted tmp.mount. Dec 13 02:18:49.521030 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 02:18:49.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:18:49.530105 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:18:49.552478 kernel: audit: type=1130 audit(1734056329.528:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.560021 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:18:49.560287 systemd[1]: Finished modprobe@configfs.service. Dec 13 02:18:49.582474 kernel: audit: type=1130 audit(1734056329.558:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.591077 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:18:49.591347 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:18:49.635427 kernel: audit: type=1130 audit(1734056329.589:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.635672 kernel: audit: type=1131 audit(1734056329.589:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.644093 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:18:49.644353 systemd[1]: Finished modprobe@drm.service. Dec 13 02:18:49.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.652966 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:18:49.653202 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 02:18:49.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.661945 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:18:49.662175 systemd[1]: Finished modprobe@fuse.service. Dec 13 02:18:49.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.670934 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:18:49.671237 systemd[1]: Finished modprobe@loop.service. Dec 13 02:18:49.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.680017 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:18:49.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.690007 systemd[1]: Finished systemd-network-generator.service. Dec 13 02:18:49.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.698942 systemd[1]: Finished systemd-remount-fs.service. Dec 13 02:18:49.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.707936 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:18:49.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.717058 systemd[1]: Reached target network-pre.target. Dec 13 02:18:49.726952 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 02:18:49.736887 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 02:18:49.743586 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:18:49.746413 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 02:18:49.755015 systemd[1]: Starting systemd-journal-flush.service... 
Dec 13 02:18:49.766539 systemd-journald[1042]: Time spent on flushing to /var/log/journal/4e9a55f4bfea6b15f643b5fb4065a0cf is 54.354ms for 1072 entries. Dec 13 02:18:49.766539 systemd-journald[1042]: System Journal (/var/log/journal/4e9a55f4bfea6b15f643b5fb4065a0cf) is 8.0M, max 584.8M, 576.8M free. Dec 13 02:18:49.875526 systemd-journald[1042]: Received client request to flush runtime journal. Dec 13 02:18:49.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.763597 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:18:49.765369 systemd[1]: Starting systemd-random-seed.service... Dec 13 02:18:49.780635 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:18:49.782580 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:18:49.791600 systemd[1]: Starting systemd-sysusers.service... Dec 13 02:18:49.878112 udevadm[1064]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 02:18:49.800217 systemd[1]: Starting systemd-udev-settle.service... Dec 13 02:18:49.811520 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 02:18:49.819709 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 02:18:49.828028 systemd[1]: Finished systemd-random-seed.service. Dec 13 02:18:49.840947 systemd[1]: Reached target first-boot-complete.target. Dec 13 02:18:49.850345 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:18:49.876886 systemd[1]: Finished systemd-journal-flush.service. Dec 13 02:18:49.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.886469 systemd[1]: Finished systemd-sysusers.service. Dec 13 02:18:49.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.896957 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:18:49.955429 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:18:49.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:50.464415 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 02:18:50.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:50.475380 systemd[1]: Starting systemd-udevd.service... Dec 13 02:18:50.500164 systemd-udevd[1074]: Using default interface naming scheme 'v252'. Dec 13 02:18:50.547858 systemd[1]: Started systemd-udevd.service. 
Dec 13 02:18:50.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:50.561079 systemd[1]: Starting systemd-networkd.service... Dec 13 02:18:50.577074 systemd[1]: Starting systemd-userdbd.service... Dec 13 02:18:50.611115 systemd[1]: Found device dev-ttyS0.device. Dec 13 02:18:50.686325 systemd[1]: Started systemd-userdbd.service. Dec 13 02:18:50.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:50.737475 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 02:18:50.788473 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:18:50.743000 audit[1079]: AVC avc: denied { confidentiality } for pid=1079 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:18:50.743000 audit[1079]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f4425c5710 a1=337fc a2=7f266e6ddbc5 a3=5 items=110 ppid=1074 pid=1079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:50.743000 audit: CWD cwd="/" Dec 13 02:18:50.743000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=1 name=(null) inode=14704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=2 name=(null) inode=14704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=3 name=(null) inode=14705 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=4 name=(null) inode=14704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=5 name=(null) inode=14706 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=6 name=(null) inode=14704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=7 name=(null) inode=14707 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=8 name=(null) inode=14707 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=9 
name=(null) inode=14708 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=10 name=(null) inode=14707 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=11 name=(null) inode=14709 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=12 name=(null) inode=14707 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=13 name=(null) inode=14710 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=14 name=(null) inode=14707 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=15 name=(null) inode=14711 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=16 name=(null) inode=14707 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=17 name=(null) inode=14712 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=18 name=(null) inode=14704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=19 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=20 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.809490 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Dec 13 02:18:50.743000 audit: PATH item=21 name=(null) inode=14714 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=22 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=23 name=(null) inode=14715 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=24 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
02:18:50.743000 audit: PATH item=25 name=(null) inode=14716 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=26 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=27 name=(null) inode=14717 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=28 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=29 name=(null) inode=14718 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=30 name=(null) inode=14704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=31 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=32 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=33 name=(null) inode=14720 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=34 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=35 name=(null) inode=14721 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=36 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=37 name=(null) inode=14722 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=38 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=39 name=(null) inode=14723 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=40 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=41 name=(null) inode=14724 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=42 name=(null) inode=14704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=43 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=44 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=45 name=(null) inode=14726 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=46 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=47 name=(null) inode=14727 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=48 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=49 name=(null) inode=14728 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=50 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=51 name=(null) inode=14729 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=52 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=53 name=(null) inode=14730 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=55 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=56 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=57 name=(null) inode=14732 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=58 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=59 name=(null) inode=14733 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=60 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=61 name=(null) inode=14734 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=62 name=(null) inode=14734 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=63 name=(null) inode=14735 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=64 name=(null) inode=14734 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=65 name=(null) inode=14736 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=66 name=(null) inode=14734 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=67 name=(null) inode=14737 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=68 name=(null) inode=14734 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=69 name=(null) inode=14738 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=70 name=(null) inode=14734 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=71 name=(null) inode=14739 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=72 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=73 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
02:18:50.743000 audit: PATH item=74 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=75 name=(null) inode=14741 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=76 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=77 name=(null) inode=14742 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=78 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=79 name=(null) inode=14743 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=80 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=81 name=(null) inode=14744 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=82 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=83 name=(null) inode=14745 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=84 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=85 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=86 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=87 name=(null) inode=14747 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=88 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=89 name=(null) inode=14748 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=90 name=(null) inode=14746 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=91 name=(null) inode=14749 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=92 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=93 name=(null) inode=14750 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=94 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=95 name=(null) inode=14751 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=96 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=97 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=98 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=99 name=(null) inode=14753 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=100 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=101 name=(null) inode=14754 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=102 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=103 name=(null) inode=14755 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=104 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=105 name=(null) inode=14756 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=106 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=107 name=(null) inode=14757 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PATH item=109 name=(null) inode=14758 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:50.743000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:18:50.826589 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 02:18:50.842469 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 13 02:18:50.860926 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 02:18:50.864826 systemd-networkd[1088]: lo: Link UP Dec 13 02:18:50.865265 systemd-networkd[1088]: lo: Gained carrier Dec 13 02:18:50.866122 systemd-networkd[1088]: Enumeration completed Dec 13 02:18:50.866464 systemd[1]: Started systemd-networkd.service. Dec 13 02:18:50.867745 systemd-networkd[1088]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:18:50.870138 systemd-networkd[1088]: eth0: Link UP Dec 13 02:18:50.870166 systemd-networkd[1088]: eth0: Gained carrier Dec 13 02:18:50.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:50.880678 systemd-networkd[1088]: eth0: DHCPv4 address 10.128.0.79/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 02:18:50.906470 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:18:50.913517 kernel: EDAC MC: Ver: 3.0.0 Dec 13 02:18:50.961473 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1094) Dec 13 02:18:50.986625 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Dec 13 02:18:50.987161 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:18:50.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.002237 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:18:51.031259 lvm[1112]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:18:51.063035 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:18:51.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.071902 systemd[1]: Reached target cryptsetup.target. Dec 13 02:18:51.082122 systemd[1]: Starting lvm2-activation.service... Dec 13 02:18:51.088184 lvm[1114]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Dec 13 02:18:51.114815 systemd[1]: Finished lvm2-activation.service. Dec 13 02:18:51.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.123869 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:18:51.132563 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:18:51.132614 systemd[1]: Reached target local-fs.target. Dec 13 02:18:51.141591 systemd[1]: Reached target machines.target. Dec 13 02:18:51.151262 systemd[1]: Starting ldconfig.service... Dec 13 02:18:51.159393 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:18:51.159499 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:18:51.161229 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:18:51.171214 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:18:51.182957 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:18:51.186117 systemd[1]: Starting systemd-sysext.service... Dec 13 02:18:51.186754 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1117 (bootctl) Dec 13 02:18:51.189380 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:18:51.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.207927 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 02:18:51.218259 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:18:51.227377 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:18:51.227798 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:18:51.254500 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 02:18:51.352123 systemd-fsck[1129]: fsck.fat 4.2 (2021-01-31) Dec 13 02:18:51.352123 systemd-fsck[1129]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 02:18:51.356389 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:18:51.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.368900 systemd[1]: Mounting boot.mount... Dec 13 02:18:51.389780 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:18:51.390958 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 02:18:51.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.402869 systemd[1]: Mounted boot.mount. Dec 13 02:18:51.430477 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:18:51.431398 systemd[1]: Finished systemd-boot-update.service. 
Dec 13 02:18:51.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.459488 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 02:18:51.479110 (sd-sysext)[1139]: Using extensions 'kubernetes'. Dec 13 02:18:51.482013 (sd-sysext)[1139]: Merged extensions into '/usr'. Dec 13 02:18:51.511578 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:51.514222 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:18:51.519969 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:18:51.521988 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:18:51.531908 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:18:51.541476 systemd[1]: Starting modprobe@loop.service... Dec 13 02:18:51.549741 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:18:51.549998 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:18:51.550212 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:51.555532 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:18:51.563379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:18:51.563693 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:18:51.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.573347 systemd[1]: Finished systemd-sysext.service. Dec 13 02:18:51.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.582071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:18:51.582330 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:18:51.591025 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:18:51.591315 systemd[1]: Finished modprobe@loop.service. Dec 13 02:18:51.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:18:51.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.603331 systemd[1]: Starting ensure-sysext.service... Dec 13 02:18:51.610605 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:18:51.610702 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:18:51.612777 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:18:51.623518 systemd[1]: Reloading. Dec 13 02:18:51.637557 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:18:51.643893 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:18:51.647351 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:18:51.733135 /usr/lib/systemd/system-generators/torcx-generator[1172]: time="2024-12-13T02:18:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:18:51.733182 /usr/lib/systemd/system-generators/torcx-generator[1172]: time="2024-12-13T02:18:51Z" level=info msg="torcx already run" Dec 13 02:18:51.855859 ldconfig[1116]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:18:51.952370 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:18:51.952667 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:18:51.983850 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:18:52.067495 systemd[1]: Finished ldconfig.service. Dec 13 02:18:52.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.076622 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:18:52.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.090897 systemd[1]: Starting audit-rules.service... Dec 13 02:18:52.099423 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:18:52.109825 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:18:52.118176 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:18:52.129799 systemd[1]: Starting systemd-resolved.service... Dec 13 02:18:52.139926 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:18:52.149168 systemd[1]: Starting systemd-update-utmp.service... 
Dec 13 02:18:52.158662 systemd[1]: Finished clean-ca-certificates.service. Dec 13 02:18:52.162000 audit[1250]: SYSTEM_BOOT pid=1250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.168664 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:18:52.169178 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:18:52.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.187430 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:18:52.190917 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:18:52.199547 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:18:52.207000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:18:52.207000 audit[1257]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc5023df90 a2=420 a3=0 items=0 ppid=1225 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:52.207000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:18:52.208695 systemd[1]: Starting modprobe@loop.service... Dec 13 02:18:52.209960 augenrules[1257]: No rules Dec 13 02:18:52.217687 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:18:52.225666 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:18:52.225925 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:18:52.226160 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:18:52.232189 systemd[1]: Finished audit-rules.service. Dec 13 02:18:52.234725 enable-oslogin[1268]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 02:18:52.240349 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 02:18:52.252687 systemd[1]: Finished systemd-update-utmp.service. Dec 13 02:18:52.261273 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:18:52.261553 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:18:52.271270 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:18:52.271555 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:18:52.281252 systemd[1]: modprobe@loop.service: Deactivated successfully. 
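Note on the audit record above: the PROCTITLE value is hex-encoded, with NUL bytes separating the argv entries. A minimal decoding sketch in Python (the hex string is copied verbatim from the record at 02:18:52.207000; the NUL-separated layout is standard for audit PROCTITLE fields):

    # Decode the hex-encoded PROCTITLE from the auditctl record above.
    raw = bytes.fromhex(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    )
    argv = [part.decode() for part in raw.split(b"\x00")]
    print(argv)  # expected: ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']

This is consistent with the surrounding entries: audit-rules.service finishes after auditctl loads /etc/audit/audit.rules, and augenrules[1257] reports "No rules".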
Dec 13 02:18:52.281534 systemd[1]: Finished modprobe@loop.service. Dec 13 02:18:52.290217 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:18:52.290580 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:18:52.300640 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:52.300820 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:18:52.301008 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:18:52.303730 systemd[1]: Starting systemd-update-done.service... Dec 13 02:18:52.310557 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:52.313969 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:52.314431 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:18:52.316836 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:18:52.325676 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:18:52.334674 systemd[1]: Starting modprobe@loop.service... Dec 13 02:18:52.343934 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:18:52.352624 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:18:52.352881 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:18:52.353091 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:18:52.353254 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:52.355781 systemd[1]: Finished systemd-update-done.service. Dec 13 02:18:52.361381 enable-oslogin[1282]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 02:18:52.365345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:18:52.365613 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:18:52.374319 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:18:52.374591 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:18:52.384286 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:18:52.384558 systemd[1]: Finished modprobe@loop.service. Dec 13 02:18:52.393383 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:18:52.393766 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:18:52.397577 systemd-timesyncd[1244]: Contacted time server 169.254.169.254:123 (169.254.169.254). Dec 13 02:18:52.398129 systemd-timesyncd[1244]: Initial clock synchronization to Fri 2024-12-13 02:18:52.450291 UTC. Dec 13 02:18:52.403229 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:18:52.405982 systemd-resolved[1240]: Positive Trust Anchors: Dec 13 02:18:52.406360 systemd-resolved[1240]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:18:52.406501 systemd-resolved[1240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:18:52.413069 systemd[1]: Reached target time-set.target. Dec 13 02:18:52.421701 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:18:52.421875 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:18:52.425810 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:52.426321 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:18:52.428637 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:18:52.438163 systemd[1]: Starting modprobe@drm.service... Dec 13 02:18:52.448328 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:18:52.451568 systemd-resolved[1240]: Defaulting to hostname 'linux'. Dec 13 02:18:52.458580 systemd[1]: Starting modprobe@loop.service... Dec 13 02:18:52.467218 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:18:52.471331 enable-oslogin[1294]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 02:18:52.475705 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:18:52.475947 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:18:52.477908 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:18:52.486606 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:18:52.486848 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:52.489033 systemd[1]: Started systemd-resolved.service. Dec 13 02:18:52.498364 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:18:52.498650 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:18:52.507158 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:18:52.507396 systemd[1]: Finished modprobe@drm.service. Dec 13 02:18:52.516152 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:18:52.516419 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:18:52.525151 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:18:52.525468 systemd[1]: Finished modprobe@loop.service. Dec 13 02:18:52.534154 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:18:52.534490 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:18:52.544385 systemd[1]: Reached target network.target. Dec 13 02:18:52.552639 systemd[1]: Reached target nss-lookup.target. 
Dec 13 02:18:52.560634 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:18:52.560696 systemd[1]: Reached target sysinit.target. Dec 13 02:18:52.569719 systemd[1]: Started motdgen.path. Dec 13 02:18:52.576671 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:18:52.586792 systemd[1]: Started logrotate.timer. Dec 13 02:18:52.593728 systemd[1]: Started mdadm.timer. Dec 13 02:18:52.600592 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:18:52.608592 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:18:52.608644 systemd[1]: Reached target paths.target. Dec 13 02:18:52.615575 systemd[1]: Reached target timers.target. Dec 13 02:18:52.623269 systemd[1]: Listening on dbus.socket. Dec 13 02:18:52.625601 systemd-networkd[1088]: eth0: Gained IPv6LL Dec 13 02:18:52.633593 systemd[1]: Starting docker.socket... Dec 13 02:18:52.642822 systemd[1]: Listening on sshd.socket. Dec 13 02:18:52.649686 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:18:52.649774 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:18:52.650760 systemd[1]: Finished ensure-sysext.service. Dec 13 02:18:52.659953 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:18:52.669718 systemd[1]: Listening on docker.socket. Dec 13 02:18:52.677626 systemd[1]: Reached target network-online.target. Dec 13 02:18:52.685572 systemd[1]: Reached target sockets.target. Dec 13 02:18:52.693540 systemd[1]: Reached target basic.target. Dec 13 02:18:52.700786 systemd[1]: System is tainted: cgroupsv1 Dec 13 02:18:52.700864 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:18:52.700904 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:18:52.702536 systemd[1]: Starting containerd.service... Dec 13 02:18:52.711200 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 02:18:52.722280 systemd[1]: Starting dbus.service... Dec 13 02:18:52.729370 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:18:52.738486 systemd[1]: Starting extend-filesystems.service... Dec 13 02:18:52.747948 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:18:52.749155 jq[1306]: false Dec 13 02:18:52.750345 systemd[1]: Starting kubelet.service... Dec 13 02:18:52.760414 systemd[1]: Starting motdgen.service... Dec 13 02:18:52.767328 systemd[1]: Starting oem-gce.service... Dec 13 02:18:52.776491 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:18:52.785809 systemd[1]: Starting sshd-keygen.service... Dec 13 02:18:52.796317 systemd[1]: Starting systemd-logind.service... Dec 13 02:18:52.803592 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:18:52.803716 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 13 02:18:52.805670 systemd[1]: Starting update-engine.service... 
Dec 13 02:18:52.815549 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:18:52.821350 jq[1325]: true Dec 13 02:18:52.826951 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:18:52.827605 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 02:18:52.830297 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:18:52.832736 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 02:18:52.866474 mkfs.ext4[1335]: mke2fs 1.46.5 (30-Dec-2021) Dec 13 02:18:52.873635 mkfs.ext4[1335]: Discarding device blocks: done Dec 13 02:18:52.873764 mkfs.ext4[1335]: Creating filesystem with 262144 4k blocks and 65536 inodes Dec 13 02:18:52.873764 mkfs.ext4[1335]: Filesystem UUID: de2fc2dd-46a0-4ef1-a7b7-d6325ee9a63a Dec 13 02:18:52.873764 mkfs.ext4[1335]: Superblock backups stored on blocks: Dec 13 02:18:52.873764 mkfs.ext4[1335]: 32768, 98304, 163840, 229376 Dec 13 02:18:52.873764 mkfs.ext4[1335]: Allocating group tables: done Dec 13 02:18:52.874007 mkfs.ext4[1335]: Writing inode tables: done Dec 13 02:18:52.874631 mkfs.ext4[1335]: Creating journal (8192 blocks): done Dec 13 02:18:52.885301 mkfs.ext4[1335]: Writing superblocks and filesystem accounting information: done Dec 13 02:18:52.906020 jq[1333]: true Dec 13 02:18:52.964125 extend-filesystems[1307]: Found loop1 Dec 13 02:18:52.978027 umount[1354]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Dec 13 02:18:52.972413 systemd[1]: Started dbus.service. Dec 13 02:18:52.972133 dbus-daemon[1305]: [system] SELinux support is enabled Dec 13 02:18:52.981468 extend-filesystems[1307]: Found sda Dec 13 02:18:52.991607 extend-filesystems[1307]: Found sda1 Dec 13 02:18:52.991607 extend-filesystems[1307]: Found sda2 Dec 13 02:18:52.991607 extend-filesystems[1307]: Found sda3 Dec 13 02:18:52.991607 extend-filesystems[1307]: Found usr Dec 13 02:18:52.991607 extend-filesystems[1307]: Found sda4 Dec 13 02:18:52.991607 extend-filesystems[1307]: Found sda6 Dec 13 02:18:52.991607 extend-filesystems[1307]: Found sda7 Dec 13 02:18:52.991607 extend-filesystems[1307]: Found sda9 Dec 13 02:18:52.991607 extend-filesystems[1307]: Checking size of /dev/sda9 Dec 13 02:18:53.174987 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 02:18:53.175089 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 02:18:53.175133 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 02:18:52.989748 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:18:53.002620 dbus-daemon[1305]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1088 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 02:18:53.175507 extend-filesystems[1307]: Resized partition /dev/sda9 Dec 13 02:18:53.200045 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
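Quick cross-check of the sizes reported above (all figures are taken from the mkfs.ext4 and EXT4-fs messages; the only assumption is the usual 512-byte-sector unit for the kernel's loop2 capacity figure):

    GIB = 1024 ** 3
    # OEM image: the kernel reports loop2 at 2097152 sectors; mkfs.ext4 writes 262144 4k blocks on it.
    assert 2097152 * 512 == 262144 * 4096 == 1 * GIB
    # Root partition: EXT4-fs grows /dev/sda9 online from 1617920 to 2538491 4k blocks.
    print(f"{1617920 * 4096 / GIB:.2f} GiB -> {2538491 * 4096 / GIB:.2f} GiB")  # ~6.17 GiB -> ~9.68 GiB

i.e. the loop-backed flatcar-oem-gce.img filesystem is exactly 1 GiB, and the root filesystem grows from roughly 6.2 GiB to roughly 9.7 GiB, matching the extend-filesystems output.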
Dec 13 02:18:53.200119 update_engine[1324]: I1213 02:18:53.039057 1324 main.cc:92] Flatcar Update Engine starting Dec 13 02:18:53.200119 update_engine[1324]: I1213 02:18:53.055161 1324 update_check_scheduler.cc:74] Next update check in 11m50s Dec 13 02:18:52.989804 systemd[1]: Reached target system-config.target. Dec 13 02:18:53.027289 dbus-daemon[1305]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 02:18:53.200784 extend-filesystems[1365]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:18:53.200784 extend-filesystems[1365]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 02:18:53.200784 extend-filesystems[1365]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 02:18:53.200784 extend-filesystems[1365]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Dec 13 02:18:53.270947 env[1334]: time="2024-12-13T02:18:53.177736572Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:18:53.009671 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:18:53.271856 extend-filesystems[1307]: Resized filesystem in /dev/sda9 Dec 13 02:18:53.280691 bash[1380]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:18:53.009713 systemd[1]: Reached target user-config.target. Dec 13 02:18:53.019206 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:18:53.019679 systemd[1]: Finished motdgen.service. Dec 13 02:18:53.033631 systemd[1]: Starting systemd-hostnamed.service... Dec 13 02:18:53.054985 systemd[1]: Started update-engine.service. Dec 13 02:18:53.084697 systemd[1]: Started locksmithd.service. Dec 13 02:18:53.193823 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:18:53.194227 systemd[1]: Finished extend-filesystems.service. Dec 13 02:18:53.210494 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 02:18:53.295314 dbus-daemon[1305]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 02:18:53.296576 systemd[1]: Started systemd-hostnamed.service. Dec 13 02:18:53.298485 dbus-daemon[1305]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1363 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 02:18:53.306791 systemd[1]: Starting polkit.service... 
Dec 13 02:18:53.337560 coreos-metadata[1304]: Dec 13 02:18:53.337 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 02:18:53.341437 coreos-metadata[1304]: Dec 13 02:18:53.341 INFO Fetch failed with 404: resource not found Dec 13 02:18:53.341437 coreos-metadata[1304]: Dec 13 02:18:53.341 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 02:18:53.342374 coreos-metadata[1304]: Dec 13 02:18:53.342 INFO Fetch successful Dec 13 02:18:53.342374 coreos-metadata[1304]: Dec 13 02:18:53.342 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 02:18:53.343214 coreos-metadata[1304]: Dec 13 02:18:53.343 INFO Fetch failed with 404: resource not found Dec 13 02:18:53.343214 coreos-metadata[1304]: Dec 13 02:18:53.343 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 02:18:53.343931 coreos-metadata[1304]: Dec 13 02:18:53.343 INFO Fetch failed with 404: resource not found Dec 13 02:18:53.343931 coreos-metadata[1304]: Dec 13 02:18:53.343 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 02:18:53.345497 coreos-metadata[1304]: Dec 13 02:18:53.345 INFO Fetch successful Dec 13 02:18:53.354733 unknown[1304]: wrote ssh authorized keys file for user: core Dec 13 02:18:53.400850 update-ssh-keys[1394]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:18:53.402141 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 02:18:53.418567 systemd-logind[1321]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:18:53.419169 systemd-logind[1321]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 02:18:53.419304 systemd-logind[1321]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:18:53.420109 systemd-logind[1321]: New seat seat0. Dec 13 02:18:53.443473 systemd[1]: Started systemd-logind.service. Dec 13 02:18:53.463266 polkitd[1392]: Started polkitd version 121 Dec 13 02:18:53.489240 polkitd[1392]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 02:18:53.489331 polkitd[1392]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 02:18:53.491660 env[1334]: time="2024-12-13T02:18:53.491558750Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:18:53.491776 env[1334]: time="2024-12-13T02:18:53.491721426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:18:53.493779 polkitd[1392]: Finished loading, compiling and executing 2 rules Dec 13 02:18:53.494360 dbus-daemon[1305]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 02:18:53.494624 systemd[1]: Started polkit.service. Dec 13 02:18:53.494847 polkitd[1392]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 02:18:53.500745 env[1334]: time="2024-12-13T02:18:53.500694196Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:18:53.500745 env[1334]: time="2024-12-13T02:18:53.500744272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 02:18:53.501205 env[1334]: time="2024-12-13T02:18:53.501166576Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:18:53.501294 env[1334]: time="2024-12-13T02:18:53.501208431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:18:53.501294 env[1334]: time="2024-12-13T02:18:53.501230971Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:18:53.501294 env[1334]: time="2024-12-13T02:18:53.501248370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:18:53.501429 env[1334]: time="2024-12-13T02:18:53.501365554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:18:53.502949 env[1334]: time="2024-12-13T02:18:53.502888477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:18:53.504618 env[1334]: time="2024-12-13T02:18:53.504578177Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:18:53.504703 env[1334]: time="2024-12-13T02:18:53.504618442Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:18:53.504757 env[1334]: time="2024-12-13T02:18:53.504708029Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:18:53.504757 env[1334]: time="2024-12-13T02:18:53.504731455Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:18:53.516886 env[1334]: time="2024-12-13T02:18:53.516691615Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:18:53.516886 env[1334]: time="2024-12-13T02:18:53.516759087Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:18:53.516886 env[1334]: time="2024-12-13T02:18:53.516783139Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:18:53.516886 env[1334]: time="2024-12-13T02:18:53.516858845Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:18:53.516886 env[1334]: time="2024-12-13T02:18:53.516885467Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:18:53.517176 env[1334]: time="2024-12-13T02:18:53.516964868Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:18:53.517176 env[1334]: time="2024-12-13T02:18:53.516988591Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:18:53.517176 env[1334]: time="2024-12-13T02:18:53.517055393Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Dec 13 02:18:53.517176 env[1334]: time="2024-12-13T02:18:53.517079656Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 02:18:53.517176 env[1334]: time="2024-12-13T02:18:53.517102844Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:18:53.517176 env[1334]: time="2024-12-13T02:18:53.517146237Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:18:53.517176 env[1334]: time="2024-12-13T02:18:53.517169846Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:18:53.517520 env[1334]: time="2024-12-13T02:18:53.517378350Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:18:53.517592 env[1334]: time="2024-12-13T02:18:53.517564104Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:18:53.518534 env[1334]: time="2024-12-13T02:18:53.518501424Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:18:53.518628 env[1334]: time="2024-12-13T02:18:53.518568871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:18:53.518628 env[1334]: time="2024-12-13T02:18:53.518595812Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:18:53.518822 env[1334]: time="2024-12-13T02:18:53.518798327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:18:53.518886 env[1334]: time="2024-12-13T02:18:53.518831290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:18:53.518886 env[1334]: time="2024-12-13T02:18:53.518853308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:18:53.519003 env[1334]: time="2024-12-13T02:18:53.518893077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:18:53.519003 env[1334]: time="2024-12-13T02:18:53.518914885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:18:53.519003 env[1334]: time="2024-12-13T02:18:53.518936550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:18:53.519003 env[1334]: time="2024-12-13T02:18:53.518978643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:18:53.519003 env[1334]: time="2024-12-13T02:18:53.518999723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:18:53.519240 env[1334]: time="2024-12-13T02:18:53.519068479Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:18:53.520678 env[1334]: time="2024-12-13T02:18:53.519302840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:18:53.520678 env[1334]: time="2024-12-13T02:18:53.519336429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Dec 13 02:18:53.520678 env[1334]: time="2024-12-13T02:18:53.519361466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:18:53.520678 env[1334]: time="2024-12-13T02:18:53.519398498Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:18:53.520678 env[1334]: time="2024-12-13T02:18:53.519424233Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:18:53.520678 env[1334]: time="2024-12-13T02:18:53.519470316Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:18:53.520678 env[1334]: time="2024-12-13T02:18:53.519503582Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:18:53.520678 env[1334]: time="2024-12-13T02:18:53.519568759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 02:18:53.521084 env[1334]: time="2024-12-13T02:18:53.519968342Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:18:53.521084 env[1334]: time="2024-12-13T02:18:53.520084682Z" level=info msg="Connect containerd service" Dec 13 02:18:53.521084 env[1334]: time="2024-12-13T02:18:53.520151655Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" 
Dec 13 02:18:53.525423 env[1334]: time="2024-12-13T02:18:53.521314324Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:18:53.525423 env[1334]: time="2024-12-13T02:18:53.521562630Z" level=info msg="Start subscribing containerd event" Dec 13 02:18:53.525423 env[1334]: time="2024-12-13T02:18:53.521642278Z" level=info msg="Start recovering state" Dec 13 02:18:53.525423 env[1334]: time="2024-12-13T02:18:53.521786459Z" level=info msg="Start event monitor" Dec 13 02:18:53.525423 env[1334]: time="2024-12-13T02:18:53.521809844Z" level=info msg="Start snapshots syncer" Dec 13 02:18:53.525423 env[1334]: time="2024-12-13T02:18:53.521823525Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:18:53.525423 env[1334]: time="2024-12-13T02:18:53.521835786Z" level=info msg="Start streaming server" Dec 13 02:18:53.525423 env[1334]: time="2024-12-13T02:18:53.523088878Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:18:53.528034 env[1334]: time="2024-12-13T02:18:53.527177562Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:18:53.527465 systemd[1]: Started containerd.service. Dec 13 02:18:53.530634 systemd-hostnamed[1363]: Hostname set to (transient) Dec 13 02:18:53.533915 systemd-resolved[1240]: System hostname changed to 'ci-3510-3-6-31ac578f16920e8dce3e.c.flatcar-212911.internal'. Dec 13 02:18:53.548004 env[1334]: time="2024-12-13T02:18:53.547956126Z" level=info msg="containerd successfully booted in 0.404837s" Dec 13 02:18:54.886469 systemd[1]: Started kubelet.service. Dec 13 02:18:55.004986 locksmithd[1376]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:18:56.404963 kubelet[1417]: E1213 02:18:56.404884 1417 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:18:56.408353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:18:56.408670 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:18:57.596910 sshd_keygen[1346]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:18:57.650166 systemd[1]: Finished sshd-keygen.service. Dec 13 02:18:57.661099 systemd[1]: Starting issuegen.service... Dec 13 02:18:57.669864 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:18:57.670245 systemd[1]: Finished issuegen.service. Dec 13 02:18:57.680721 systemd[1]: Starting systemd-user-sessions.service... Dec 13 02:18:57.693885 systemd[1]: Finished systemd-user-sessions.service. Dec 13 02:18:57.705299 systemd[1]: Started getty@tty1.service. Dec 13 02:18:57.715065 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 02:18:57.724010 systemd[1]: Reached target getty.target. Dec 13 02:18:59.217685 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Dec 13 02:19:01.186587 systemd[1]: Created slice system-sshd.slice. Dec 13 02:19:01.197676 systemd[1]: Started sshd@0-10.128.0.79:22-139.178.68.195:33878.service. 
Dec 13 02:19:01.356507 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 02:19:01.375230 systemd-nspawn[1445]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Dec 13 02:19:01.375230 systemd-nspawn[1445]: Press ^] three times within 1s to kill container. Dec 13 02:19:01.389540 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:19:01.483684 systemd[1]: Started oem-gce.service. Dec 13 02:19:01.492077 systemd[1]: Reached target multi-user.target. Dec 13 02:19:01.502783 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 02:19:01.516306 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 02:19:01.516705 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 02:19:01.526809 systemd[1]: Startup finished in 8.212s (kernel) + 16.503s (userspace) = 24.715s. Dec 13 02:19:01.532243 systemd-nspawn[1445]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 13 02:19:01.532243 systemd-nspawn[1445]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 13 02:19:01.532243 systemd-nspawn[1445]: + /usr/bin/google_instance_setup Dec 13 02:19:01.535836 sshd[1442]: Accepted publickey for core from 139.178.68.195 port 33878 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:19:01.540076 sshd[1442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:01.557127 systemd[1]: Created slice user-500.slice. Dec 13 02:19:01.558924 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:19:01.564341 systemd-logind[1321]: New session 1 of user core. Dec 13 02:19:01.578030 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:19:01.582178 systemd[1]: Starting user@500.service... Dec 13 02:19:01.597825 (systemd)[1456]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:01.742184 systemd[1456]: Queued start job for default target default.target. Dec 13 02:19:01.742516 systemd[1456]: Reached target paths.target. Dec 13 02:19:01.742538 systemd[1456]: Reached target sockets.target. Dec 13 02:19:01.742554 systemd[1456]: Reached target timers.target. Dec 13 02:19:01.742567 systemd[1456]: Reached target basic.target. Dec 13 02:19:01.742636 systemd[1456]: Reached target default.target. Dec 13 02:19:01.742690 systemd[1456]: Startup finished in 131ms. Dec 13 02:19:01.742796 systemd[1]: Started user@500.service. Dec 13 02:19:01.744594 systemd[1]: Started session-1.scope. Dec 13 02:19:01.972821 systemd[1]: Started sshd@1-10.128.0.79:22-139.178.68.195:33882.service. Dec 13 02:19:02.241651 instance-setup[1452]: INFO Running google_set_multiqueue. Dec 13 02:19:02.258464 instance-setup[1452]: INFO Set channels for eth0 to 2. Dec 13 02:19:02.262537 instance-setup[1452]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1. Dec 13 02:19:02.263552 instance-setup[1452]: INFO /proc/irq/27/smp_affinity_list: real affinity 0 Dec 13 02:19:02.264039 instance-setup[1452]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1. Dec 13 02:19:02.265530 instance-setup[1452]: INFO /proc/irq/28/smp_affinity_list: real affinity 0 Dec 13 02:19:02.265861 instance-setup[1452]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1. Dec 13 02:19:02.267216 instance-setup[1452]: INFO /proc/irq/29/smp_affinity_list: real affinity 1 Dec 13 02:19:02.267813 instance-setup[1452]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1. 
Dec 13 02:19:02.269246 instance-setup[1452]: INFO /proc/irq/30/smp_affinity_list: real affinity 1 Dec 13 02:19:02.277174 sshd[1465]: Accepted publickey for core from 139.178.68.195 port 33882 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:19:02.279348 sshd[1465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:02.287484 systemd[1]: Started session-2.scope. Dec 13 02:19:02.288030 systemd-logind[1321]: New session 2 of user core. Dec 13 02:19:02.301057 instance-setup[1452]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 13 02:19:02.301247 instance-setup[1452]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 13 02:19:02.341702 systemd-nspawn[1445]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 13 02:19:02.497343 sshd[1465]: pam_unix(sshd:session): session closed for user core Dec 13 02:19:02.505903 systemd-logind[1321]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:19:02.508392 systemd[1]: sshd@1-10.128.0.79:22-139.178.68.195:33882.service: Deactivated successfully. Dec 13 02:19:02.509652 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:19:02.511908 systemd-logind[1321]: Removed session 2. Dec 13 02:19:02.545680 systemd[1]: Started sshd@2-10.128.0.79:22-139.178.68.195:33894.service. Dec 13 02:19:02.691252 startup-script[1499]: INFO Starting startup scripts. Dec 13 02:19:02.703316 startup-script[1499]: INFO No startup scripts found in metadata. Dec 13 02:19:02.703494 startup-script[1499]: INFO Finished running startup scripts. Dec 13 02:19:02.733815 systemd-nspawn[1445]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 13 02:19:02.733815 systemd-nspawn[1445]: + daemon_pids=() Dec 13 02:19:02.734564 systemd-nspawn[1445]: + for d in accounts clock_skew network Dec 13 02:19:02.734564 systemd-nspawn[1445]: + daemon_pids+=($!) Dec 13 02:19:02.734564 systemd-nspawn[1445]: + for d in accounts clock_skew network Dec 13 02:19:02.734564 systemd-nspawn[1445]: + daemon_pids+=($!) Dec 13 02:19:02.734564 systemd-nspawn[1445]: + for d in accounts clock_skew network Dec 13 02:19:02.734831 systemd-nspawn[1445]: + daemon_pids+=($!) Dec 13 02:19:02.734831 systemd-nspawn[1445]: + NOTIFY_SOCKET=/run/systemd/notify Dec 13 02:19:02.734831 systemd-nspawn[1445]: + /usr/bin/systemd-notify --ready Dec 13 02:19:02.735249 systemd-nspawn[1445]: + /usr/bin/google_network_daemon Dec 13 02:19:02.735637 systemd-nspawn[1445]: + /usr/bin/google_clock_skew_daemon Dec 13 02:19:02.735868 systemd-nspawn[1445]: + /usr/bin/google_accounts_daemon Dec 13 02:19:02.781798 systemd-nspawn[1445]: + wait -n 36 37 38 Dec 13 02:19:02.859129 sshd[1503]: Accepted publickey for core from 139.178.68.195 port 33894 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:19:02.860748 sshd[1503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:02.869043 systemd[1]: Started session-3.scope. Dec 13 02:19:02.871068 systemd-logind[1321]: New session 3 of user core. Dec 13 02:19:03.072768 sshd[1503]: pam_unix(sshd:session): session closed for user core Dec 13 02:19:03.077381 systemd[1]: sshd@2-10.128.0.79:22-139.178.68.195:33894.service: Deactivated successfully. Dec 13 02:19:03.078674 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:19:03.080868 systemd-logind[1321]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:19:03.082340 systemd-logind[1321]: Removed session 3. 
Dec 13 02:19:03.116279 systemd[1]: Started sshd@3-10.128.0.79:22-139.178.68.195:33902.service. Dec 13 02:19:03.426866 sshd[1516]: Accepted publickey for core from 139.178.68.195 port 33902 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:19:03.428399 sshd[1516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:03.436687 systemd[1]: Started session-4.scope. Dec 13 02:19:03.438854 systemd-logind[1321]: New session 4 of user core. Dec 13 02:19:03.464393 google-networking[1509]: INFO Starting Google Networking daemon. Dec 13 02:19:03.521484 google-clock-skew[1508]: INFO Starting Google Clock Skew daemon. Dec 13 02:19:03.534936 google-clock-skew[1508]: INFO Clock drift token has changed: 0. Dec 13 02:19:03.542823 systemd-nspawn[1445]: hwclock: Cannot access the Hardware Clock via any known method. Dec 13 02:19:03.543118 systemd-nspawn[1445]: hwclock: Use the --verbose option to see the details of our search for an access method. Dec 13 02:19:03.543957 google-clock-skew[1508]: WARNING Failed to sync system time with hardware clock. Dec 13 02:19:03.563517 groupadd[1528]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 13 02:19:03.567231 groupadd[1528]: group added to /etc/gshadow: name=google-sudoers Dec 13 02:19:03.570845 groupadd[1528]: new group: name=google-sudoers, GID=1000 Dec 13 02:19:03.583153 google-accounts[1507]: INFO Starting Google Accounts daemon. Dec 13 02:19:03.611084 google-accounts[1507]: WARNING OS Login not installed. Dec 13 02:19:03.612333 google-accounts[1507]: INFO Creating a new user account for 0. Dec 13 02:19:03.617804 systemd-nspawn[1445]: useradd: invalid user name '0': use --badname to ignore Dec 13 02:19:03.618416 google-accounts[1507]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 13 02:19:03.642824 sshd[1516]: pam_unix(sshd:session): session closed for user core Dec 13 02:19:03.646794 systemd[1]: sshd@3-10.128.0.79:22-139.178.68.195:33902.service: Deactivated successfully. Dec 13 02:19:03.648265 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:19:03.649409 systemd-logind[1321]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:19:03.651399 systemd-logind[1321]: Removed session 4. Dec 13 02:19:03.687625 systemd[1]: Started sshd@4-10.128.0.79:22-139.178.68.195:33906.service. Dec 13 02:19:03.981882 sshd[1541]: Accepted publickey for core from 139.178.68.195 port 33906 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:19:03.983770 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:03.990544 systemd-logind[1321]: New session 5 of user core. Dec 13 02:19:03.990919 systemd[1]: Started session-5.scope. Dec 13 02:19:04.179539 sudo[1545]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:19:04.180027 sudo[1545]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:19:04.199530 systemd[1]: Starting coreos-metadata.service... 
Dec 13 02:19:04.248494 coreos-metadata[1549]: Dec 13 02:19:04.248 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Dec 13 02:19:04.249994 coreos-metadata[1549]: Dec 13 02:19:04.249 INFO Fetch successful Dec 13 02:19:04.250311 coreos-metadata[1549]: Dec 13 02:19:04.250 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Dec 13 02:19:04.251138 coreos-metadata[1549]: Dec 13 02:19:04.251 INFO Fetch successful Dec 13 02:19:04.251245 coreos-metadata[1549]: Dec 13 02:19:04.251 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Dec 13 02:19:04.251875 coreos-metadata[1549]: Dec 13 02:19:04.251 INFO Fetch successful Dec 13 02:19:04.251964 coreos-metadata[1549]: Dec 13 02:19:04.251 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Dec 13 02:19:04.252687 coreos-metadata[1549]: Dec 13 02:19:04.252 INFO Fetch successful Dec 13 02:19:04.267926 systemd[1]: Finished coreos-metadata.service. Dec 13 02:19:05.213089 systemd[1]: Stopped kubelet.service. Dec 13 02:19:05.216695 systemd[1]: Starting kubelet.service... Dec 13 02:19:05.245260 systemd[1]: Reloading. Dec 13 02:19:05.354373 /usr/lib/systemd/system-generators/torcx-generator[1610]: time="2024-12-13T02:19:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:19:05.354420 /usr/lib/systemd/system-generators/torcx-generator[1610]: time="2024-12-13T02:19:05Z" level=info msg="torcx already run" Dec 13 02:19:05.504258 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:19:05.504286 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:19:05.528164 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:19:05.648630 systemd[1]: Started kubelet.service. Dec 13 02:19:05.657167 systemd[1]: Stopping kubelet.service... Dec 13 02:19:05.658343 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:19:05.658764 systemd[1]: Stopped kubelet.service. Dec 13 02:19:05.663276 systemd[1]: Starting kubelet.service... Dec 13 02:19:05.865105 systemd[1]: Started kubelet.service. Dec 13 02:19:05.937514 kubelet[1675]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:19:05.937962 kubelet[1675]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:19:05.938026 kubelet[1675]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 02:19:05.938182 kubelet[1675]: I1213 02:19:05.938144 1675 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:19:06.206934 kubelet[1675]: I1213 02:19:06.206363 1675 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:19:06.206934 kubelet[1675]: I1213 02:19:06.206403 1675 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:19:06.206934 kubelet[1675]: I1213 02:19:06.206754 1675 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:19:06.256541 kubelet[1675]: I1213 02:19:06.256500 1675 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:19:06.270783 kubelet[1675]: I1213 02:19:06.270734 1675 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:19:06.272963 kubelet[1675]: I1213 02:19:06.272932 1675 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:19:06.273255 kubelet[1675]: I1213 02:19:06.273228 1675 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:19:06.274586 kubelet[1675]: I1213 02:19:06.274552 1675 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:19:06.274586 kubelet[1675]: I1213 02:19:06.274587 1675 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:19:06.276991 kubelet[1675]: I1213 02:19:06.276954 1675 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:19:06.277141 kubelet[1675]: I1213 02:19:06.277121 1675 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:19:06.277214 kubelet[1675]: I1213 02:19:06.277154 1675 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:19:06.277214 kubelet[1675]: I1213 02:19:06.277192 1675 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:19:06.277303 kubelet[1675]: I1213 02:19:06.277215 1675 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:19:06.277762 kubelet[1675]: 
E1213 02:19:06.277718 1675 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:06.277886 kubelet[1675]: E1213 02:19:06.277781 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:06.279742 kubelet[1675]: I1213 02:19:06.279718 1675 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:19:06.284672 kubelet[1675]: I1213 02:19:06.284644 1675 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:19:06.284775 kubelet[1675]: W1213 02:19:06.284757 1675 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:19:06.285539 kubelet[1675]: I1213 02:19:06.285513 1675 server.go:1256] "Started kubelet" Dec 13 02:19:06.291610 kubelet[1675]: I1213 02:19:06.291560 1675 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:19:06.292715 kubelet[1675]: I1213 02:19:06.292691 1675 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:19:06.293223 kubelet[1675]: I1213 02:19:06.293200 1675 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:19:06.293371 kubelet[1675]: I1213 02:19:06.292790 1675 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:19:06.305473 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 02:19:06.305569 kubelet[1675]: I1213 02:19:06.305370 1675 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:19:06.314726 kubelet[1675]: I1213 02:19:06.314683 1675 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:19:06.315232 kubelet[1675]: I1213 02:19:06.315193 1675 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:19:06.315351 kubelet[1675]: I1213 02:19:06.315330 1675 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:19:06.323588 kubelet[1675]: I1213 02:19:06.323562 1675 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:19:06.323780 kubelet[1675]: I1213 02:19:06.323689 1675 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:19:06.325122 kubelet[1675]: E1213 02:19:06.325080 1675 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.79\" not found" node="10.128.0.79" Dec 13 02:19:06.332148 kubelet[1675]: E1213 02:19:06.332103 1675 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:19:06.332742 kubelet[1675]: I1213 02:19:06.332723 1675 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:19:06.367431 kubelet[1675]: I1213 02:19:06.367405 1675 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:19:06.367646 kubelet[1675]: I1213 02:19:06.367631 1675 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:19:06.367772 kubelet[1675]: I1213 02:19:06.367759 1675 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:19:06.370245 kubelet[1675]: I1213 02:19:06.370221 1675 policy_none.go:49] "None policy: Start" Dec 13 02:19:06.371122 kubelet[1675]: I1213 02:19:06.371101 1675 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:19:06.371271 kubelet[1675]: I1213 02:19:06.371257 1675 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:19:06.380066 kubelet[1675]: I1213 02:19:06.380041 1675 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:19:06.382291 kubelet[1675]: I1213 02:19:06.382269 1675 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:19:06.386188 kubelet[1675]: E1213 02:19:06.386169 1675 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.79\" not found" Dec 13 02:19:06.416314 kubelet[1675]: I1213 02:19:06.416266 1675 kubelet_node_status.go:73] "Attempting to register node" node="10.128.0.79" Dec 13 02:19:06.421276 kubelet[1675]: I1213 02:19:06.421250 1675 kubelet_node_status.go:76] "Successfully registered node" node="10.128.0.79" Dec 13 02:19:06.477107 kubelet[1675]: I1213 02:19:06.474821 1675 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:19:06.477539 kubelet[1675]: I1213 02:19:06.477513 1675 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:19:06.477687 kubelet[1675]: I1213 02:19:06.477556 1675 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:19:06.477687 kubelet[1675]: I1213 02:19:06.477585 1675 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:19:06.477687 kubelet[1675]: E1213 02:19:06.477650 1675 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 02:19:06.537757 kubelet[1675]: I1213 02:19:06.537713 1675 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 02:19:06.538203 env[1334]: time="2024-12-13T02:19:06.538131593Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 02:19:06.538815 kubelet[1675]: I1213 02:19:06.538395 1675 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 02:19:07.209707 kubelet[1675]: I1213 02:19:07.209650 1675 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 02:19:07.210480 kubelet[1675]: W1213 02:19:07.209912 1675 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:19:07.210480 kubelet[1675]: W1213 02:19:07.210266 1675 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:19:07.210480 kubelet[1675]: W1213 02:19:07.210307 1675 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:19:07.278462 kubelet[1675]: E1213 02:19:07.278398 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:07.278462 kubelet[1675]: I1213 02:19:07.278400 1675 apiserver.go:52] "Watching apiserver" Dec 13 02:19:07.283011 kubelet[1675]: I1213 02:19:07.282974 1675 topology_manager.go:215] "Topology Admit Handler" podUID="75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" podNamespace="kube-system" podName="cilium-dt2g6" Dec 13 02:19:07.283400 kubelet[1675]: I1213 02:19:07.283374 1675 topology_manager.go:215] "Topology Admit Handler" podUID="993029a7-3847-4ac4-9555-bc512a54ca12" podNamespace="kube-system" podName="kube-proxy-dm8t9" Dec 13 02:19:07.316435 kubelet[1675]: I1213 02:19:07.316397 1675 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:19:07.319381 kubelet[1675]: I1213 02:19:07.319336 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/993029a7-3847-4ac4-9555-bc512a54ca12-kube-proxy\") pod \"kube-proxy-dm8t9\" (UID: \"993029a7-3847-4ac4-9555-bc512a54ca12\") " pod="kube-system/kube-proxy-dm8t9" Dec 13 02:19:07.319678 kubelet[1675]: I1213 02:19:07.319650 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cilium-cgroup\") pod \"cilium-dt2g6\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " pod="kube-system/cilium-dt2g6" Dec 13 02:19:07.319846 kubelet[1675]: I1213 02:19:07.319828 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-xtables-lock\") pod \"cilium-dt2g6\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " pod="kube-system/cilium-dt2g6" Dec 13 02:19:07.320032 kubelet[1675]: I1213 02:19:07.319996 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-host-proc-sys-net\") pod 
\"cilium-dt2g6\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " pod="kube-system/cilium-dt2g6" Dec 13 02:19:07.320234 kubelet[1675]: I1213 02:19:07.320215 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-hubble-tls\") pod \"cilium-dt2g6\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " pod="kube-system/cilium-dt2g6" Dec 13 02:19:07.320397 kubelet[1675]: I1213 02:19:07.320374 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cilium-run\") pod \"cilium-dt2g6\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " pod="kube-system/cilium-dt2g6" Dec 13 02:19:07.320492 kubelet[1675]: I1213 02:19:07.320423 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-bpf-maps\") pod \"cilium-dt2g6\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " pod="kube-system/cilium-dt2g6" Dec 13 02:19:07.320492 kubelet[1675]: I1213 02:19:07.320474 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cni-path\") pod \"cilium-dt2g6\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " pod="kube-system/cilium-dt2g6" Dec 13 02:19:07.320612 kubelet[1675]: I1213 02:19:07.320518 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-etc-cni-netd\") pod \"cilium-dt2g6\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " pod="kube-system/cilium-dt2g6" Dec 13 02:19:07.320612 kubelet[1675]: I1213 02:19:07.320561 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cilium-config-path\") pod \"cilium-dt2g6\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " pod="kube-system/cilium-dt2g6" Dec 13 02:19:07.320730 kubelet[1675]: I1213 02:19:07.320625 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj2p9\" (UniqueName: \"kubernetes.io/projected/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-kube-api-access-nj2p9\") pod \"cilium-dt2g6\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " pod="kube-system/cilium-dt2g6" Dec 13 02:19:07.320730 kubelet[1675]: I1213 02:19:07.320665 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/993029a7-3847-4ac4-9555-bc512a54ca12-lib-modules\") pod \"kube-proxy-dm8t9\" (UID: \"993029a7-3847-4ac4-9555-bc512a54ca12\") " pod="kube-system/kube-proxy-dm8t9" Dec 13 02:19:07.320730 kubelet[1675]: I1213 02:19:07.320711 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66rt8\" (UniqueName: \"kubernetes.io/projected/993029a7-3847-4ac4-9555-bc512a54ca12-kube-api-access-66rt8\") pod \"kube-proxy-dm8t9\" (UID: \"993029a7-3847-4ac4-9555-bc512a54ca12\") " pod="kube-system/kube-proxy-dm8t9" Dec 13 02:19:07.320884 kubelet[1675]: I1213 02:19:07.320754 1675 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-clustermesh-secrets\") pod \"cilium-dt2g6\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " pod="kube-system/cilium-dt2g6" Dec 13 02:19:07.320884 kubelet[1675]: I1213 02:19:07.320790 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/993029a7-3847-4ac4-9555-bc512a54ca12-xtables-lock\") pod \"kube-proxy-dm8t9\" (UID: \"993029a7-3847-4ac4-9555-bc512a54ca12\") " pod="kube-system/kube-proxy-dm8t9" Dec 13 02:19:07.320884 kubelet[1675]: I1213 02:19:07.320826 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-hostproc\") pod \"cilium-dt2g6\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " pod="kube-system/cilium-dt2g6" Dec 13 02:19:07.320884 kubelet[1675]: I1213 02:19:07.320859 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-lib-modules\") pod \"cilium-dt2g6\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " pod="kube-system/cilium-dt2g6" Dec 13 02:19:07.321085 kubelet[1675]: I1213 02:19:07.320893 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-host-proc-sys-kernel\") pod \"cilium-dt2g6\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " pod="kube-system/cilium-dt2g6" Dec 13 02:19:07.390689 sudo[1545]: pam_unix(sudo:session): session closed for user root Dec 13 02:19:07.439285 sshd[1541]: pam_unix(sshd:session): session closed for user core Dec 13 02:19:07.456237 systemd[1]: sshd@4-10.128.0.79:22-139.178.68.195:33906.service: Deactivated successfully. Dec 13 02:19:07.457580 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:19:07.460526 systemd-logind[1321]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:19:07.464201 systemd-logind[1321]: Removed session 5. Dec 13 02:19:07.592552 env[1334]: time="2024-12-13T02:19:07.592468566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dm8t9,Uid:993029a7-3847-4ac4-9555-bc512a54ca12,Namespace:kube-system,Attempt:0,}" Dec 13 02:19:07.598457 env[1334]: time="2024-12-13T02:19:07.598245506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dt2g6,Uid:75b273d3-cac7-4e6e-b2b4-c6cdb2de2538,Namespace:kube-system,Attempt:0,}" Dec 13 02:19:08.116763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1176711589.mount: Deactivated successfully. 
Dec 13 02:19:08.124659 env[1334]: time="2024-12-13T02:19:08.124602986Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:08.126008 env[1334]: time="2024-12-13T02:19:08.125951269Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:08.129576 env[1334]: time="2024-12-13T02:19:08.129526203Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:08.130811 env[1334]: time="2024-12-13T02:19:08.130759470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:08.131837 env[1334]: time="2024-12-13T02:19:08.131799152Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:08.134256 env[1334]: time="2024-12-13T02:19:08.134206028Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:08.135146 env[1334]: time="2024-12-13T02:19:08.135109871Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:08.137703 env[1334]: time="2024-12-13T02:19:08.137657538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:08.163655 env[1334]: time="2024-12-13T02:19:08.161610114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:19:08.163655 env[1334]: time="2024-12-13T02:19:08.161668082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:19:08.163655 env[1334]: time="2024-12-13T02:19:08.161689718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:19:08.163655 env[1334]: time="2024-12-13T02:19:08.161887898Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f pid=1734 runtime=io.containerd.runc.v2 Dec 13 02:19:08.166983 env[1334]: time="2024-12-13T02:19:08.166892072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:19:08.167231 env[1334]: time="2024-12-13T02:19:08.167180947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:19:08.167408 env[1334]: time="2024-12-13T02:19:08.167361980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:19:08.167882 env[1334]: time="2024-12-13T02:19:08.167793073Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bcbe0c111e7ad25823675270f869eee935ee5c1c8825478cc7ee37cd089e80c7 pid=1735 runtime=io.containerd.runc.v2 Dec 13 02:19:08.251786 env[1334]: time="2024-12-13T02:19:08.251724015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dt2g6,Uid:75b273d3-cac7-4e6e-b2b4-c6cdb2de2538,Namespace:kube-system,Attempt:0,} returns sandbox id \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\"" Dec 13 02:19:08.255361 env[1334]: time="2024-12-13T02:19:08.255293396Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:19:08.263438 env[1334]: time="2024-12-13T02:19:08.262745181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dm8t9,Uid:993029a7-3847-4ac4-9555-bc512a54ca12,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcbe0c111e7ad25823675270f869eee935ee5c1c8825478cc7ee37cd089e80c7\"" Dec 13 02:19:08.279476 kubelet[1675]: E1213 02:19:08.279426 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:09.280083 kubelet[1675]: E1213 02:19:09.280024 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:10.280340 kubelet[1675]: E1213 02:19:10.280256 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:11.280805 kubelet[1675]: E1213 02:19:11.280759 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:12.281255 kubelet[1675]: E1213 02:19:12.281206 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:13.282472 kubelet[1675]: E1213 02:19:13.282416 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:14.283437 kubelet[1675]: E1213 02:19:14.283369 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:15.028682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050329356.mount: Deactivated successfully. 
Dec 13 02:19:15.283964 kubelet[1675]: E1213 02:19:15.283608 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:16.284055 kubelet[1675]: E1213 02:19:16.284006 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:17.284284 kubelet[1675]: E1213 02:19:17.284198 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:18.259181 env[1334]: time="2024-12-13T02:19:18.259114610Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:18.261903 env[1334]: time="2024-12-13T02:19:18.261858211Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:18.264137 env[1334]: time="2024-12-13T02:19:18.264095256Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:18.264988 env[1334]: time="2024-12-13T02:19:18.264945119Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:19:18.265987 env[1334]: time="2024-12-13T02:19:18.265951699Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 02:19:18.268623 env[1334]: time="2024-12-13T02:19:18.268562385Z" level=info msg="CreateContainer within sandbox \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:19:18.285548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3738572333.mount: Deactivated successfully. Dec 13 02:19:18.286076 kubelet[1675]: E1213 02:19:18.285609 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:18.301197 env[1334]: time="2024-12-13T02:19:18.301137726Z" level=info msg="CreateContainer within sandbox \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dd11278fc2255f282e909e8a91011172c422bbeea62531853f11f1caee550a18\"" Dec 13 02:19:18.302033 env[1334]: time="2024-12-13T02:19:18.301974314Z" level=info msg="StartContainer for \"dd11278fc2255f282e909e8a91011172c422bbeea62531853f11f1caee550a18\"" Dec 13 02:19:18.371548 env[1334]: time="2024-12-13T02:19:18.369696259Z" level=info msg="StartContainer for \"dd11278fc2255f282e909e8a91011172c422bbeea62531853f11f1caee550a18\" returns successfully" Dec 13 02:19:19.280160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd11278fc2255f282e909e8a91011172c422bbeea62531853f11f1caee550a18-rootfs.mount: Deactivated successfully. 
Dec 13 02:19:19.286773 kubelet[1675]: E1213 02:19:19.286716 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:20.229490 env[1334]: time="2024-12-13T02:19:20.229368053Z" level=info msg="shim disconnected" id=dd11278fc2255f282e909e8a91011172c422bbeea62531853f11f1caee550a18 Dec 13 02:19:20.230248 env[1334]: time="2024-12-13T02:19:20.229569684Z" level=warning msg="cleaning up after shim disconnected" id=dd11278fc2255f282e909e8a91011172c422bbeea62531853f11f1caee550a18 namespace=k8s.io Dec 13 02:19:20.230248 env[1334]: time="2024-12-13T02:19:20.229601138Z" level=info msg="cleaning up dead shim" Dec 13 02:19:20.242972 env[1334]: time="2024-12-13T02:19:20.242914272Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1855 runtime=io.containerd.runc.v2\n" Dec 13 02:19:20.287610 kubelet[1675]: E1213 02:19:20.287549 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:20.514244 env[1334]: time="2024-12-13T02:19:20.514047016Z" level=info msg="CreateContainer within sandbox \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:19:20.546977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1716596482.mount: Deactivated successfully. Dec 13 02:19:20.563202 env[1334]: time="2024-12-13T02:19:20.563149015Z" level=info msg="CreateContainer within sandbox \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3766927637267af642b06951243edd9b73a20060e3eb4f34f420379dfa27d5c0\"" Dec 13 02:19:20.564892 env[1334]: time="2024-12-13T02:19:20.564849268Z" level=info msg="StartContainer for \"3766927637267af642b06951243edd9b73a20060e3eb4f34f420379dfa27d5c0\"" Dec 13 02:19:20.662232 env[1334]: time="2024-12-13T02:19:20.662162786Z" level=info msg="StartContainer for \"3766927637267af642b06951243edd9b73a20060e3eb4f34f420379dfa27d5c0\" returns successfully" Dec 13 02:19:20.682371 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:19:20.682690 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:19:20.683690 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:19:20.686142 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:19:20.700577 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 02:19:20.739158 env[1334]: time="2024-12-13T02:19:20.739099822Z" level=info msg="shim disconnected" id=3766927637267af642b06951243edd9b73a20060e3eb4f34f420379dfa27d5c0 Dec 13 02:19:20.739559 env[1334]: time="2024-12-13T02:19:20.739527127Z" level=warning msg="cleaning up after shim disconnected" id=3766927637267af642b06951243edd9b73a20060e3eb4f34f420379dfa27d5c0 namespace=k8s.io Dec 13 02:19:20.739697 env[1334]: time="2024-12-13T02:19:20.739676410Z" level=info msg="cleaning up dead shim" Dec 13 02:19:20.767669 env[1334]: time="2024-12-13T02:19:20.767554034Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1921 runtime=io.containerd.runc.v2\n" Dec 13 02:19:21.287749 kubelet[1675]: E1213 02:19:21.287698 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:21.521621 env[1334]: time="2024-12-13T02:19:21.518335292Z" level=info msg="CreateContainer within sandbox \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:19:21.525830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985755465.mount: Deactivated successfully. Dec 13 02:19:21.554096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount378203582.mount: Deactivated successfully. Dec 13 02:19:21.564346 env[1334]: time="2024-12-13T02:19:21.564291050Z" level=info msg="CreateContainer within sandbox \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e3d02551b699cfa033035f0688f81055ae131b92a3b1c29850976df94b0690d0\"" Dec 13 02:19:21.565504 env[1334]: time="2024-12-13T02:19:21.565440254Z" level=info msg="StartContainer for \"e3d02551b699cfa033035f0688f81055ae131b92a3b1c29850976df94b0690d0\"" Dec 13 02:19:21.685339 env[1334]: time="2024-12-13T02:19:21.685274695Z" level=info msg="StartContainer for \"e3d02551b699cfa033035f0688f81055ae131b92a3b1c29850976df94b0690d0\" returns successfully" Dec 13 02:19:21.915896 env[1334]: time="2024-12-13T02:19:21.915341911Z" level=info msg="shim disconnected" id=e3d02551b699cfa033035f0688f81055ae131b92a3b1c29850976df94b0690d0 Dec 13 02:19:21.916325 env[1334]: time="2024-12-13T02:19:21.916283779Z" level=warning msg="cleaning up after shim disconnected" id=e3d02551b699cfa033035f0688f81055ae131b92a3b1c29850976df94b0690d0 namespace=k8s.io Dec 13 02:19:21.916523 env[1334]: time="2024-12-13T02:19:21.916488679Z" level=info msg="cleaning up dead shim" Dec 13 02:19:21.929814 env[1334]: time="2024-12-13T02:19:21.929769717Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1981 runtime=io.containerd.runc.v2\n" Dec 13 02:19:22.049627 env[1334]: time="2024-12-13T02:19:22.049553779Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:22.052017 env[1334]: time="2024-12-13T02:19:22.051966006Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:22.054170 env[1334]: time="2024-12-13T02:19:22.054122259Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:22.055862 env[1334]: time="2024-12-13T02:19:22.055821420Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:22.056523 env[1334]: time="2024-12-13T02:19:22.056482542Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 02:19:22.059341 env[1334]: time="2024-12-13T02:19:22.059288411Z" level=info msg="CreateContainer within sandbox \"bcbe0c111e7ad25823675270f869eee935ee5c1c8825478cc7ee37cd089e80c7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:19:22.074704 env[1334]: time="2024-12-13T02:19:22.074645847Z" level=info msg="CreateContainer within sandbox \"bcbe0c111e7ad25823675270f869eee935ee5c1c8825478cc7ee37cd089e80c7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6a29de2bc0291e76af6d39a6ab46468682e8bb8984f5b48d410222978f1130a1\"" Dec 13 02:19:22.075781 env[1334]: time="2024-12-13T02:19:22.075738259Z" level=info msg="StartContainer for \"6a29de2bc0291e76af6d39a6ab46468682e8bb8984f5b48d410222978f1130a1\"" Dec 13 02:19:22.157928 env[1334]: time="2024-12-13T02:19:22.157826852Z" level=info msg="StartContainer for \"6a29de2bc0291e76af6d39a6ab46468682e8bb8984f5b48d410222978f1130a1\" returns successfully" Dec 13 02:19:22.288724 kubelet[1675]: E1213 02:19:22.288571 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:22.527305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3d02551b699cfa033035f0688f81055ae131b92a3b1c29850976df94b0690d0-rootfs.mount: Deactivated successfully. Dec 13 02:19:22.532291 env[1334]: time="2024-12-13T02:19:22.532218916Z" level=info msg="CreateContainer within sandbox \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:19:22.556826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3085709736.mount: Deactivated successfully. 
Dec 13 02:19:22.567922 env[1334]: time="2024-12-13T02:19:22.567741487Z" level=info msg="CreateContainer within sandbox \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3a79e9dcf5915e5b819c5265861608c5c7358b6ffd8d77c0797b639ed169deae\"" Dec 13 02:19:22.576495 env[1334]: time="2024-12-13T02:19:22.575913770Z" level=info msg="StartContainer for \"3a79e9dcf5915e5b819c5265861608c5c7358b6ffd8d77c0797b639ed169deae\"" Dec 13 02:19:22.595108 kubelet[1675]: I1213 02:19:22.595068 1675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dm8t9" podStartSLOduration=2.802176863 podStartE2EDuration="16.595004501s" podCreationTimestamp="2024-12-13 02:19:06 +0000 UTC" firstStartedPulling="2024-12-13 02:19:08.26401382 +0000 UTC m=+2.386917496" lastFinishedPulling="2024-12-13 02:19:22.056841457 +0000 UTC m=+16.179745134" observedRunningTime="2024-12-13 02:19:22.594771695 +0000 UTC m=+16.717675382" watchObservedRunningTime="2024-12-13 02:19:22.595004501 +0000 UTC m=+16.717908209" Dec 13 02:19:22.661596 env[1334]: time="2024-12-13T02:19:22.661540469Z" level=info msg="StartContainer for \"3a79e9dcf5915e5b819c5265861608c5c7358b6ffd8d77c0797b639ed169deae\" returns successfully" Dec 13 02:19:22.704425 env[1334]: time="2024-12-13T02:19:22.704356785Z" level=info msg="shim disconnected" id=3a79e9dcf5915e5b819c5265861608c5c7358b6ffd8d77c0797b639ed169deae Dec 13 02:19:22.704425 env[1334]: time="2024-12-13T02:19:22.704424565Z" level=warning msg="cleaning up after shim disconnected" id=3a79e9dcf5915e5b819c5265861608c5c7358b6ffd8d77c0797b639ed169deae namespace=k8s.io Dec 13 02:19:22.704838 env[1334]: time="2024-12-13T02:19:22.704463149Z" level=info msg="cleaning up dead shim" Dec 13 02:19:22.716282 env[1334]: time="2024-12-13T02:19:22.716218711Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2198 runtime=io.containerd.runc.v2\n" Dec 13 02:19:23.289467 kubelet[1675]: E1213 02:19:23.289388 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:23.525221 systemd[1]: run-containerd-runc-k8s.io-3a79e9dcf5915e5b819c5265861608c5c7358b6ffd8d77c0797b639ed169deae-runc.ehuHro.mount: Deactivated successfully. Dec 13 02:19:23.525486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a79e9dcf5915e5b819c5265861608c5c7358b6ffd8d77c0797b639ed169deae-rootfs.mount: Deactivated successfully. Dec 13 02:19:23.553729 env[1334]: time="2024-12-13T02:19:23.553194408Z" level=info msg="CreateContainer within sandbox \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:19:23.563271 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 13 02:19:23.582268 env[1334]: time="2024-12-13T02:19:23.582219000Z" level=info msg="CreateContainer within sandbox \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"716d12ba4a6b5936176fab0e772a5e5a195474e2c11f533768394cd60642654b\"" Dec 13 02:19:23.583107 env[1334]: time="2024-12-13T02:19:23.583063575Z" level=info msg="StartContainer for \"716d12ba4a6b5936176fab0e772a5e5a195474e2c11f533768394cd60642654b\"" Dec 13 02:19:23.659262 env[1334]: time="2024-12-13T02:19:23.659211007Z" level=info msg="StartContainer for \"716d12ba4a6b5936176fab0e772a5e5a195474e2c11f533768394cd60642654b\" returns successfully" Dec 13 02:19:23.846374 kubelet[1675]: I1213 02:19:23.846217 1675 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:19:24.185482 kernel: Initializing XFRM netlink socket Dec 13 02:19:24.290721 kubelet[1675]: E1213 02:19:24.290655 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:24.573915 kubelet[1675]: I1213 02:19:24.573795 1675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dt2g6" podStartSLOduration=8.562590806 podStartE2EDuration="18.573725506s" podCreationTimestamp="2024-12-13 02:19:06 +0000 UTC" firstStartedPulling="2024-12-13 02:19:08.254273346 +0000 UTC m=+2.377177009" lastFinishedPulling="2024-12-13 02:19:18.265408033 +0000 UTC m=+12.388311709" observedRunningTime="2024-12-13 02:19:24.573710292 +0000 UTC m=+18.696613990" watchObservedRunningTime="2024-12-13 02:19:24.573725506 +0000 UTC m=+18.696629175" Dec 13 02:19:25.292173 kubelet[1675]: E1213 02:19:25.292102 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:25.588251 kubelet[1675]: I1213 02:19:25.585225 1675 topology_manager.go:215] "Topology Admit Handler" podUID="78d511e9-8f8b-4f5e-bebf-0b79223ff913" podNamespace="default" podName="nginx-deployment-6d5f899847-f967n" Dec 13 02:19:25.588251 kubelet[1675]: W1213 02:19:25.587858 1675 reflector.go:539] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:10.128.0.79" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '10.128.0.79' and this object Dec 13 02:19:25.588251 kubelet[1675]: E1213 02:19:25.587945 1675 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:10.128.0.79" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '10.128.0.79' and this object Dec 13 02:19:25.750482 kubelet[1675]: I1213 02:19:25.750414 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lzjc\" (UniqueName: \"kubernetes.io/projected/78d511e9-8f8b-4f5e-bebf-0b79223ff913-kube-api-access-7lzjc\") pod \"nginx-deployment-6d5f899847-f967n\" (UID: \"78d511e9-8f8b-4f5e-bebf-0b79223ff913\") " pod="default/nginx-deployment-6d5f899847-f967n" Dec 13 02:19:25.844399 systemd-networkd[1088]: cilium_host: Link UP Dec 13 02:19:25.862596 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 02:19:25.862810 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes 
ready Dec 13 02:19:25.863015 systemd-networkd[1088]: cilium_net: Link UP Dec 13 02:19:25.863378 systemd-networkd[1088]: cilium_net: Gained carrier Dec 13 02:19:25.865097 systemd-networkd[1088]: cilium_host: Gained carrier Dec 13 02:19:25.873636 systemd-networkd[1088]: cilium_net: Gained IPv6LL Dec 13 02:19:26.011968 systemd-networkd[1088]: cilium_vxlan: Link UP Dec 13 02:19:26.011983 systemd-networkd[1088]: cilium_vxlan: Gained carrier Dec 13 02:19:26.274509 kernel: NET: Registered PF_ALG protocol family Dec 13 02:19:26.278077 kubelet[1675]: E1213 02:19:26.278026 1675 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:26.292603 kubelet[1675]: E1213 02:19:26.292547 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:26.738097 systemd-networkd[1088]: cilium_host: Gained IPv6LL Dec 13 02:19:26.791495 env[1334]: time="2024-12-13T02:19:26.791420068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-f967n,Uid:78d511e9-8f8b-4f5e-bebf-0b79223ff913,Namespace:default,Attempt:0,}" Dec 13 02:19:27.127044 systemd-networkd[1088]: lxc_health: Link UP Dec 13 02:19:27.141553 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:19:27.150796 systemd-networkd[1088]: lxc_health: Gained carrier Dec 13 02:19:27.293276 kubelet[1675]: E1213 02:19:27.293184 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:27.365154 systemd-networkd[1088]: lxc055b5334d15e: Link UP Dec 13 02:19:27.381776 kernel: eth0: renamed from tmp529b1 Dec 13 02:19:27.396481 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc055b5334d15e: link becomes ready Dec 13 02:19:27.397800 systemd-networkd[1088]: lxc055b5334d15e: Gained carrier Dec 13 02:19:27.697669 systemd-networkd[1088]: cilium_vxlan: Gained IPv6LL Dec 13 02:19:28.294390 kubelet[1675]: E1213 02:19:28.294326 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:28.465681 systemd-networkd[1088]: lxc055b5334d15e: Gained IPv6LL Dec 13 02:19:29.041763 systemd-networkd[1088]: lxc_health: Gained IPv6LL Dec 13 02:19:29.295108 kubelet[1675]: E1213 02:19:29.294963 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:30.296510 kubelet[1675]: E1213 02:19:30.296434 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:31.298146 kubelet[1675]: E1213 02:19:31.298095 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:32.154143 env[1334]: time="2024-12-13T02:19:32.154017555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:19:32.154143 env[1334]: time="2024-12-13T02:19:32.154071009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:19:32.154143 env[1334]: time="2024-12-13T02:19:32.154091429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:19:32.155119 env[1334]: time="2024-12-13T02:19:32.155045541Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/529b160765909f84578a0e206186a7d4bd7afe08708b2fa770a821e3dcc44905 pid=2719 runtime=io.containerd.runc.v2 Dec 13 02:19:32.183982 systemd[1]: run-containerd-runc-k8s.io-529b160765909f84578a0e206186a7d4bd7afe08708b2fa770a821e3dcc44905-runc.cLgscJ.mount: Deactivated successfully. Dec 13 02:19:32.242798 env[1334]: time="2024-12-13T02:19:32.242743453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-f967n,Uid:78d511e9-8f8b-4f5e-bebf-0b79223ff913,Namespace:default,Attempt:0,} returns sandbox id \"529b160765909f84578a0e206186a7d4bd7afe08708b2fa770a821e3dcc44905\"" Dec 13 02:19:32.245595 env[1334]: time="2024-12-13T02:19:32.245489985Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 02:19:32.299474 kubelet[1675]: E1213 02:19:32.299380 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:33.299890 kubelet[1675]: E1213 02:19:33.299845 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:33.300713 kubelet[1675]: I1213 02:19:33.300684 1675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:19:34.300394 kubelet[1675]: E1213 02:19:34.300301 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:34.722667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015983423.mount: Deactivated successfully. 
Dec 13 02:19:35.301427 kubelet[1675]: E1213 02:19:35.301335 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:36.302224 kubelet[1675]: E1213 02:19:36.302177 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:36.432992 env[1334]: time="2024-12-13T02:19:36.431318753Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:36.437615 env[1334]: time="2024-12-13T02:19:36.437556488Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:36.439339 env[1334]: time="2024-12-13T02:19:36.439297913Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:36.441851 env[1334]: time="2024-12-13T02:19:36.441810668Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:36.443151 env[1334]: time="2024-12-13T02:19:36.443015210Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 02:19:36.448397 env[1334]: time="2024-12-13T02:19:36.447282165Z" level=info msg="CreateContainer within sandbox \"529b160765909f84578a0e206186a7d4bd7afe08708b2fa770a821e3dcc44905\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 02:19:36.477367 env[1334]: time="2024-12-13T02:19:36.477298628Z" level=info msg="CreateContainer within sandbox \"529b160765909f84578a0e206186a7d4bd7afe08708b2fa770a821e3dcc44905\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a548069a30aa6307f69ee013ebb00ea969590b0c2a98a823a07bac4011b0eb39\"" Dec 13 02:19:36.479902 env[1334]: time="2024-12-13T02:19:36.479844739Z" level=info msg="StartContainer for \"a548069a30aa6307f69ee013ebb00ea969590b0c2a98a823a07bac4011b0eb39\"" Dec 13 02:19:36.554437 env[1334]: time="2024-12-13T02:19:36.553542520Z" level=info msg="StartContainer for \"a548069a30aa6307f69ee013ebb00ea969590b0c2a98a823a07bac4011b0eb39\" returns successfully" Dec 13 02:19:36.593679 kubelet[1675]: I1213 02:19:36.593620 1675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-f967n" podStartSLOduration=7.392701443 podStartE2EDuration="11.59356167s" podCreationTimestamp="2024-12-13 02:19:25 +0000 UTC" firstStartedPulling="2024-12-13 02:19:32.24477238 +0000 UTC m=+26.367676058" lastFinishedPulling="2024-12-13 02:19:36.445632609 +0000 UTC m=+30.568536285" observedRunningTime="2024-12-13 02:19:36.59308833 +0000 UTC m=+30.715992030" watchObservedRunningTime="2024-12-13 02:19:36.59356167 +0000 UTC m=+30.716465356" Dec 13 02:19:37.303053 kubelet[1675]: E1213 02:19:37.302996 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:37.825637 update_engine[1324]: I1213 02:19:37.825553 1324 update_attempter.cc:509] Updating boot flags... 
Dec 13 02:19:38.303398 kubelet[1675]: E1213 02:19:38.303338 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:39.304315 kubelet[1675]: E1213 02:19:39.304240 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:40.304965 kubelet[1675]: E1213 02:19:40.304915 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:40.320741 kubelet[1675]: I1213 02:19:40.320694 1675 topology_manager.go:215] "Topology Admit Handler" podUID="c3720025-0a8e-415f-910e-4df55dc3ac9a" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 02:19:40.457106 kubelet[1675]: I1213 02:19:40.457056 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmblh\" (UniqueName: \"kubernetes.io/projected/c3720025-0a8e-415f-910e-4df55dc3ac9a-kube-api-access-hmblh\") pod \"nfs-server-provisioner-0\" (UID: \"c3720025-0a8e-415f-910e-4df55dc3ac9a\") " pod="default/nfs-server-provisioner-0" Dec 13 02:19:40.457362 kubelet[1675]: I1213 02:19:40.457223 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c3720025-0a8e-415f-910e-4df55dc3ac9a-data\") pod \"nfs-server-provisioner-0\" (UID: \"c3720025-0a8e-415f-910e-4df55dc3ac9a\") " pod="default/nfs-server-provisioner-0" Dec 13 02:19:40.625482 env[1334]: time="2024-12-13T02:19:40.624928764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c3720025-0a8e-415f-910e-4df55dc3ac9a,Namespace:default,Attempt:0,}" Dec 13 02:19:40.678025 systemd-networkd[1088]: lxc2d64d84593d5: Link UP Dec 13 02:19:40.687550 kernel: eth0: renamed from tmpcb7d6 Dec 13 02:19:40.707854 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:19:40.707973 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2d64d84593d5: link becomes ready Dec 13 02:19:40.708294 systemd-networkd[1088]: lxc2d64d84593d5: Gained carrier Dec 13 02:19:40.889350 env[1334]: time="2024-12-13T02:19:40.889144215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:19:40.889350 env[1334]: time="2024-12-13T02:19:40.889203439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:19:40.889350 env[1334]: time="2024-12-13T02:19:40.889222366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:19:40.889934 env[1334]: time="2024-12-13T02:19:40.889840236Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb7d69dcd5a52203532510cb42f2e0315e7c6901d018e53f277c2eb785ec9040 pid=2857 runtime=io.containerd.runc.v2 Dec 13 02:19:40.982425 env[1334]: time="2024-12-13T02:19:40.981748459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c3720025-0a8e-415f-910e-4df55dc3ac9a,Namespace:default,Attempt:0,} returns sandbox id \"cb7d69dcd5a52203532510cb42f2e0315e7c6901d018e53f277c2eb785ec9040\"" Dec 13 02:19:40.984075 env[1334]: time="2024-12-13T02:19:40.984002925Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 02:19:41.305991 kubelet[1675]: E1213 02:19:41.305935 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:42.306401 kubelet[1675]: E1213 02:19:42.306349 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:42.609937 systemd-networkd[1088]: lxc2d64d84593d5: Gained IPv6LL Dec 13 02:19:43.306709 kubelet[1675]: E1213 02:19:43.306657 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:43.580253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2220492249.mount: Deactivated successfully. Dec 13 02:19:44.307363 kubelet[1675]: E1213 02:19:44.307288 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:45.308524 kubelet[1675]: E1213 02:19:45.308438 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:45.961308 env[1334]: time="2024-12-13T02:19:45.961216298Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:45.964395 env[1334]: time="2024-12-13T02:19:45.964329361Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:45.966899 env[1334]: time="2024-12-13T02:19:45.966856201Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:45.969671 env[1334]: time="2024-12-13T02:19:45.969620331Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:45.970918 env[1334]: time="2024-12-13T02:19:45.970874345Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 02:19:45.975243 env[1334]: time="2024-12-13T02:19:45.975180994Z" level=info msg="CreateContainer within sandbox \"cb7d69dcd5a52203532510cb42f2e0315e7c6901d018e53f277c2eb785ec9040\" for container 
&ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 02:19:45.991689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4048502441.mount: Deactivated successfully. Dec 13 02:19:45.999043 env[1334]: time="2024-12-13T02:19:45.998961717Z" level=info msg="CreateContainer within sandbox \"cb7d69dcd5a52203532510cb42f2e0315e7c6901d018e53f277c2eb785ec9040\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"5625608102b396db15a68b9cab50c25ae04be85788d7c78e85aafbbb7473fa8c\"" Dec 13 02:19:45.999883 env[1334]: time="2024-12-13T02:19:45.999818989Z" level=info msg="StartContainer for \"5625608102b396db15a68b9cab50c25ae04be85788d7c78e85aafbbb7473fa8c\"" Dec 13 02:19:46.084637 env[1334]: time="2024-12-13T02:19:46.084584453Z" level=info msg="StartContainer for \"5625608102b396db15a68b9cab50c25ae04be85788d7c78e85aafbbb7473fa8c\" returns successfully" Dec 13 02:19:46.278169 kubelet[1675]: E1213 02:19:46.278102 1675 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:46.309673 kubelet[1675]: E1213 02:19:46.309606 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:46.644240 kubelet[1675]: I1213 02:19:46.643785 1675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.6556904129999999 podStartE2EDuration="6.643732296s" podCreationTimestamp="2024-12-13 02:19:40 +0000 UTC" firstStartedPulling="2024-12-13 02:19:40.983353824 +0000 UTC m=+35.106257513" lastFinishedPulling="2024-12-13 02:19:45.971395716 +0000 UTC m=+40.094299396" observedRunningTime="2024-12-13 02:19:46.64333371 +0000 UTC m=+40.766237394" watchObservedRunningTime="2024-12-13 02:19:46.643732296 +0000 UTC m=+40.766635982" Dec 13 02:19:47.310177 kubelet[1675]: E1213 02:19:47.310124 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:48.311155 kubelet[1675]: E1213 02:19:48.311093 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:49.311921 kubelet[1675]: E1213 02:19:49.311852 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:50.312893 kubelet[1675]: E1213 02:19:50.312825 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:51.313696 kubelet[1675]: E1213 02:19:51.313611 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:52.314471 kubelet[1675]: E1213 02:19:52.314390 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:53.315169 kubelet[1675]: E1213 02:19:53.315103 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:54.315339 kubelet[1675]: E1213 02:19:54.315274 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:55.316386 kubelet[1675]: E1213 02:19:55.316323 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:55.701439 kubelet[1675]: I1213 
02:19:55.701285 1675 topology_manager.go:215] "Topology Admit Handler" podUID="2b3659a7-7b50-4b57-80d8-b4e13df5ba87" podNamespace="default" podName="test-pod-1" Dec 13 02:19:55.846791 kubelet[1675]: I1213 02:19:55.846747 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n94l\" (UniqueName: \"kubernetes.io/projected/2b3659a7-7b50-4b57-80d8-b4e13df5ba87-kube-api-access-2n94l\") pod \"test-pod-1\" (UID: \"2b3659a7-7b50-4b57-80d8-b4e13df5ba87\") " pod="default/test-pod-1" Dec 13 02:19:55.846791 kubelet[1675]: I1213 02:19:55.846817 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fb512b99-d7bb-4339-bcf4-9a53681397a3\" (UniqueName: \"kubernetes.io/nfs/2b3659a7-7b50-4b57-80d8-b4e13df5ba87-pvc-fb512b99-d7bb-4339-bcf4-9a53681397a3\") pod \"test-pod-1\" (UID: \"2b3659a7-7b50-4b57-80d8-b4e13df5ba87\") " pod="default/test-pod-1" Dec 13 02:19:55.992482 kernel: FS-Cache: Loaded Dec 13 02:19:56.058496 kernel: RPC: Registered named UNIX socket transport module. Dec 13 02:19:56.058681 kernel: RPC: Registered udp transport module. Dec 13 02:19:56.058727 kernel: RPC: Registered tcp transport module. Dec 13 02:19:56.060021 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 02:19:56.147678 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 02:19:56.317745 kubelet[1675]: E1213 02:19:56.317238 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:56.375377 kernel: NFS: Registering the id_resolver key type Dec 13 02:19:56.375577 kernel: Key type id_resolver registered Dec 13 02:19:56.375624 kernel: Key type id_legacy registered Dec 13 02:19:56.434071 nfsidmap[2976]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Dec 13 02:19:56.445828 nfsidmap[2977]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Dec 13 02:19:56.606866 env[1334]: time="2024-12-13T02:19:56.606277494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2b3659a7-7b50-4b57-80d8-b4e13df5ba87,Namespace:default,Attempt:0,}" Dec 13 02:19:56.649844 systemd-networkd[1088]: lxc384f024bedf2: Link UP Dec 13 02:19:56.660564 kernel: eth0: renamed from tmp7aa50 Dec 13 02:19:56.680802 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:19:56.680928 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc384f024bedf2: link becomes ready Dec 13 02:19:56.688140 systemd-networkd[1088]: lxc384f024bedf2: Gained carrier Dec 13 02:19:56.902555 env[1334]: time="2024-12-13T02:19:56.901945898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:19:56.903146 env[1334]: time="2024-12-13T02:19:56.903101743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:19:56.903334 env[1334]: time="2024-12-13T02:19:56.903303745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:19:56.903659 env[1334]: time="2024-12-13T02:19:56.903622909Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7aa5004eb09e30e9233c40a88d87dae0782b4aaaba9ba4e929a2043334679d20 pid=3000 runtime=io.containerd.runc.v2 Dec 13 02:19:56.987614 env[1334]: time="2024-12-13T02:19:56.987562679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2b3659a7-7b50-4b57-80d8-b4e13df5ba87,Namespace:default,Attempt:0,} returns sandbox id \"7aa5004eb09e30e9233c40a88d87dae0782b4aaaba9ba4e929a2043334679d20\"" Dec 13 02:19:56.990037 env[1334]: time="2024-12-13T02:19:56.989996232Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 02:19:57.190251 env[1334]: time="2024-12-13T02:19:57.189659923Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:57.193118 env[1334]: time="2024-12-13T02:19:57.193078952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:57.195739 env[1334]: time="2024-12-13T02:19:57.195696097Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:57.198722 env[1334]: time="2024-12-13T02:19:57.198681710Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:57.199596 env[1334]: time="2024-12-13T02:19:57.199556237Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 02:19:57.202351 env[1334]: time="2024-12-13T02:19:57.202297984Z" level=info msg="CreateContainer within sandbox \"7aa5004eb09e30e9233c40a88d87dae0782b4aaaba9ba4e929a2043334679d20\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 02:19:57.220414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3272810475.mount: Deactivated successfully. 
Dec 13 02:19:57.233797 env[1334]: time="2024-12-13T02:19:57.233712430Z" level=info msg="CreateContainer within sandbox \"7aa5004eb09e30e9233c40a88d87dae0782b4aaaba9ba4e929a2043334679d20\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"64d342ebaa8ea59cb67c7101b494137bb6f12a29c44bc04ca999b792e2c5b4f5\"" Dec 13 02:19:57.234743 env[1334]: time="2024-12-13T02:19:57.234702431Z" level=info msg="StartContainer for \"64d342ebaa8ea59cb67c7101b494137bb6f12a29c44bc04ca999b792e2c5b4f5\"" Dec 13 02:19:57.304484 env[1334]: time="2024-12-13T02:19:57.302569325Z" level=info msg="StartContainer for \"64d342ebaa8ea59cb67c7101b494137bb6f12a29c44bc04ca999b792e2c5b4f5\" returns successfully" Dec 13 02:19:57.318801 kubelet[1675]: E1213 02:19:57.318730 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:57.841857 systemd-networkd[1088]: lxc384f024bedf2: Gained IPv6LL Dec 13 02:19:58.319729 kubelet[1675]: E1213 02:19:58.319664 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:19:59.320703 kubelet[1675]: E1213 02:19:59.320638 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:20:00.321830 kubelet[1675]: E1213 02:20:00.321762 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:20:00.719326 kubelet[1675]: I1213 02:20:00.718905 1675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.508302054 podStartE2EDuration="19.718851182s" podCreationTimestamp="2024-12-13 02:19:41 +0000 UTC" firstStartedPulling="2024-12-13 02:19:56.989346607 +0000 UTC m=+51.112250281" lastFinishedPulling="2024-12-13 02:19:57.199895741 +0000 UTC m=+51.322799409" observedRunningTime="2024-12-13 02:19:57.67771382 +0000 UTC m=+51.800617505" watchObservedRunningTime="2024-12-13 02:20:00.718851182 +0000 UTC m=+54.841754869" Dec 13 02:20:00.751228 systemd[1]: run-containerd-runc-k8s.io-716d12ba4a6b5936176fab0e772a5e5a195474e2c11f533768394cd60642654b-runc.ekrmhQ.mount: Deactivated successfully. Dec 13 02:20:00.771207 env[1334]: time="2024-12-13T02:20:00.770823520Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:20:00.777464 env[1334]: time="2024-12-13T02:20:00.777378870Z" level=info msg="StopContainer for \"716d12ba4a6b5936176fab0e772a5e5a195474e2c11f533768394cd60642654b\" with timeout 2 (s)" Dec 13 02:20:00.777876 env[1334]: time="2024-12-13T02:20:00.777834819Z" level=info msg="Stop container \"716d12ba4a6b5936176fab0e772a5e5a195474e2c11f533768394cd60642654b\" with signal terminated" Dec 13 02:20:00.788128 systemd-networkd[1088]: lxc_health: Link DOWN Dec 13 02:20:00.788149 systemd-networkd[1088]: lxc_health: Lost carrier Dec 13 02:20:00.840213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-716d12ba4a6b5936176fab0e772a5e5a195474e2c11f533768394cd60642654b-rootfs.mount: Deactivated successfully. 
Dec 13 02:20:01.322287 kubelet[1675]: E1213 02:20:01.322222 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:20:01.403686 kubelet[1675]: E1213 02:20:01.403629 1675 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:20:02.323467 kubelet[1675]: E1213 02:20:02.323403 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:20:02.615881 env[1334]: time="2024-12-13T02:20:02.615714457Z" level=info msg="shim disconnected" id=716d12ba4a6b5936176fab0e772a5e5a195474e2c11f533768394cd60642654b Dec 13 02:20:02.615881 env[1334]: time="2024-12-13T02:20:02.615780427Z" level=warning msg="cleaning up after shim disconnected" id=716d12ba4a6b5936176fab0e772a5e5a195474e2c11f533768394cd60642654b namespace=k8s.io Dec 13 02:20:02.615881 env[1334]: time="2024-12-13T02:20:02.615795076Z" level=info msg="cleaning up dead shim" Dec 13 02:20:02.627909 env[1334]: time="2024-12-13T02:20:02.627527456Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3132 runtime=io.containerd.runc.v2\n" Dec 13 02:20:02.631971 env[1334]: time="2024-12-13T02:20:02.631925843Z" level=info msg="StopContainer for \"716d12ba4a6b5936176fab0e772a5e5a195474e2c11f533768394cd60642654b\" returns successfully" Dec 13 02:20:02.633081 env[1334]: time="2024-12-13T02:20:02.633045594Z" level=info msg="StopPodSandbox for \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\"" Dec 13 02:20:02.633304 env[1334]: time="2024-12-13T02:20:02.633273832Z" level=info msg="Container to stop \"3766927637267af642b06951243edd9b73a20060e3eb4f34f420379dfa27d5c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:20:02.633436 env[1334]: time="2024-12-13T02:20:02.633403610Z" level=info msg="Container to stop \"e3d02551b699cfa033035f0688f81055ae131b92a3b1c29850976df94b0690d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:20:02.633591 env[1334]: time="2024-12-13T02:20:02.633563475Z" level=info msg="Container to stop \"3a79e9dcf5915e5b819c5265861608c5c7358b6ffd8d77c0797b639ed169deae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:20:02.633715 env[1334]: time="2024-12-13T02:20:02.633685174Z" level=info msg="Container to stop \"dd11278fc2255f282e909e8a91011172c422bbeea62531853f11f1caee550a18\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:20:02.633819 env[1334]: time="2024-12-13T02:20:02.633795325Z" level=info msg="Container to stop \"716d12ba4a6b5936176fab0e772a5e5a195474e2c11f533768394cd60642654b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:20:02.637642 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f-shm.mount: Deactivated successfully. Dec 13 02:20:02.673412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f-rootfs.mount: Deactivated successfully. 
Dec 13 02:20:02.682410 env[1334]: time="2024-12-13T02:20:02.681490237Z" level=info msg="shim disconnected" id=250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f Dec 13 02:20:02.682670 env[1334]: time="2024-12-13T02:20:02.682416966Z" level=warning msg="cleaning up after shim disconnected" id=250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f namespace=k8s.io Dec 13 02:20:02.683050 env[1334]: time="2024-12-13T02:20:02.682434163Z" level=info msg="cleaning up dead shim" Dec 13 02:20:02.694978 env[1334]: time="2024-12-13T02:20:02.694929439Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3164 runtime=io.containerd.runc.v2\n" Dec 13 02:20:02.695378 env[1334]: time="2024-12-13T02:20:02.695340808Z" level=info msg="TearDown network for sandbox \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" successfully" Dec 13 02:20:02.697232 env[1334]: time="2024-12-13T02:20:02.695376449Z" level=info msg="StopPodSandbox for \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" returns successfully" Dec 13 02:20:02.796190 kubelet[1675]: I1213 02:20:02.796128 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-xtables-lock\") pod \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " Dec 13 02:20:02.796190 kubelet[1675]: I1213 02:20:02.796187 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cni-path\") pod \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " Dec 13 02:20:02.796534 kubelet[1675]: I1213 02:20:02.796232 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-hubble-tls\") pod \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " Dec 13 02:20:02.796534 kubelet[1675]: I1213 02:20:02.796272 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-host-proc-sys-kernel\") pod \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " Dec 13 02:20:02.796534 kubelet[1675]: I1213 02:20:02.796304 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cilium-config-path\") pod \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " Dec 13 02:20:02.796534 kubelet[1675]: I1213 02:20:02.796335 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj2p9\" (UniqueName: \"kubernetes.io/projected/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-kube-api-access-nj2p9\") pod \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " Dec 13 02:20:02.796534 kubelet[1675]: I1213 02:20:02.796362 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cilium-cgroup\") pod \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\" (UID: 
\"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " Dec 13 02:20:02.796534 kubelet[1675]: I1213 02:20:02.796430 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-host-proc-sys-net\") pod \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " Dec 13 02:20:02.796857 kubelet[1675]: I1213 02:20:02.796477 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-bpf-maps\") pod \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " Dec 13 02:20:02.796857 kubelet[1675]: I1213 02:20:02.796511 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-etc-cni-netd\") pod \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " Dec 13 02:20:02.796857 kubelet[1675]: I1213 02:20:02.796546 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-clustermesh-secrets\") pod \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " Dec 13 02:20:02.796857 kubelet[1675]: I1213 02:20:02.796578 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-hostproc\") pod \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " Dec 13 02:20:02.796857 kubelet[1675]: I1213 02:20:02.796614 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cilium-run\") pod \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " Dec 13 02:20:02.796857 kubelet[1675]: I1213 02:20:02.796655 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-lib-modules\") pod \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\" (UID: \"75b273d3-cac7-4e6e-b2b4-c6cdb2de2538\") " Dec 13 02:20:02.797189 kubelet[1675]: I1213 02:20:02.796752 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" (UID: "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:02.797189 kubelet[1675]: I1213 02:20:02.796808 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" (UID: "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:02.797189 kubelet[1675]: I1213 02:20:02.796842 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cni-path" (OuterVolumeSpecName: "cni-path") pod "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" (UID: "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:02.798042 kubelet[1675]: I1213 02:20:02.797466 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" (UID: "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:02.798042 kubelet[1675]: I1213 02:20:02.797578 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" (UID: "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:02.798042 kubelet[1675]: I1213 02:20:02.797615 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" (UID: "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:02.798544 kubelet[1675]: I1213 02:20:02.798326 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" (UID: "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:02.804704 systemd[1]: var-lib-kubelet-pods-75b273d3\x2dcac7\x2d4e6e\x2db2b4\x2dc6cdb2de2538-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:20:02.806090 kubelet[1675]: I1213 02:20:02.805757 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-hostproc" (OuterVolumeSpecName: "hostproc") pod "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" (UID: "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:02.806090 kubelet[1675]: I1213 02:20:02.805828 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" (UID: "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:02.808077 kubelet[1675]: I1213 02:20:02.808043 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" (UID: "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:20:02.808286 kubelet[1675]: I1213 02:20:02.808260 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" (UID: "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:02.809417 kubelet[1675]: I1213 02:20:02.809386 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" (UID: "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:20:02.816102 systemd[1]: var-lib-kubelet-pods-75b273d3\x2dcac7\x2d4e6e\x2db2b4\x2dc6cdb2de2538-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnj2p9.mount: Deactivated successfully. Dec 13 02:20:02.816323 systemd[1]: var-lib-kubelet-pods-75b273d3\x2dcac7\x2d4e6e\x2db2b4\x2dc6cdb2de2538-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:20:02.818537 kubelet[1675]: I1213 02:20:02.818440 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-kube-api-access-nj2p9" (OuterVolumeSpecName: "kube-api-access-nj2p9") pod "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" (UID: "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538"). InnerVolumeSpecName "kube-api-access-nj2p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:20:02.818846 kubelet[1675]: I1213 02:20:02.818807 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" (UID: "75b273d3-cac7-4e6e-b2b4-c6cdb2de2538"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:20:02.898195 kubelet[1675]: I1213 02:20:02.897901 1675 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-host-proc-sys-kernel\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:02.898195 kubelet[1675]: I1213 02:20:02.897953 1675 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cilium-config-path\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:02.898195 kubelet[1675]: I1213 02:20:02.897971 1675 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nj2p9\" (UniqueName: \"kubernetes.io/projected/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-kube-api-access-nj2p9\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:02.898195 kubelet[1675]: I1213 02:20:02.897989 1675 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-hubble-tls\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:02.898195 kubelet[1675]: I1213 02:20:02.898006 1675 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-host-proc-sys-net\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:02.898195 kubelet[1675]: I1213 02:20:02.898023 1675 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-bpf-maps\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:02.898195 kubelet[1675]: I1213 02:20:02.898038 1675 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-etc-cni-netd\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:02.898195 kubelet[1675]: I1213 02:20:02.898053 1675 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-clustermesh-secrets\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:02.898790 kubelet[1675]: I1213 02:20:02.898067 1675 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-hostproc\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:02.898790 kubelet[1675]: I1213 02:20:02.898082 1675 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cilium-cgroup\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:02.898790 kubelet[1675]: I1213 02:20:02.898098 1675 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-lib-modules\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:02.898790 kubelet[1675]: I1213 02:20:02.898112 1675 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cilium-run\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:02.898790 kubelet[1675]: I1213 02:20:02.898128 1675 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-cni-path\") on node 
\"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:02.898790 kubelet[1675]: I1213 02:20:02.898143 1675 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538-xtables-lock\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:03.324503 kubelet[1675]: E1213 02:20:03.324437 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:20:03.691139 kubelet[1675]: I1213 02:20:03.690562 1675 scope.go:117] "RemoveContainer" containerID="716d12ba4a6b5936176fab0e772a5e5a195474e2c11f533768394cd60642654b" Dec 13 02:20:03.692973 env[1334]: time="2024-12-13T02:20:03.692917705Z" level=info msg="RemoveContainer for \"716d12ba4a6b5936176fab0e772a5e5a195474e2c11f533768394cd60642654b\"" Dec 13 02:20:03.699949 env[1334]: time="2024-12-13T02:20:03.699894607Z" level=info msg="RemoveContainer for \"716d12ba4a6b5936176fab0e772a5e5a195474e2c11f533768394cd60642654b\" returns successfully" Dec 13 02:20:03.701278 kubelet[1675]: I1213 02:20:03.701250 1675 scope.go:117] "RemoveContainer" containerID="3a79e9dcf5915e5b819c5265861608c5c7358b6ffd8d77c0797b639ed169deae" Dec 13 02:20:03.704501 env[1334]: time="2024-12-13T02:20:03.704436242Z" level=info msg="RemoveContainer for \"3a79e9dcf5915e5b819c5265861608c5c7358b6ffd8d77c0797b639ed169deae\"" Dec 13 02:20:03.708982 env[1334]: time="2024-12-13T02:20:03.708930503Z" level=info msg="RemoveContainer for \"3a79e9dcf5915e5b819c5265861608c5c7358b6ffd8d77c0797b639ed169deae\" returns successfully" Dec 13 02:20:03.709280 kubelet[1675]: I1213 02:20:03.709245 1675 scope.go:117] "RemoveContainer" containerID="e3d02551b699cfa033035f0688f81055ae131b92a3b1c29850976df94b0690d0" Dec 13 02:20:03.710816 env[1334]: time="2024-12-13T02:20:03.710778887Z" level=info msg="RemoveContainer for \"e3d02551b699cfa033035f0688f81055ae131b92a3b1c29850976df94b0690d0\"" Dec 13 02:20:03.714911 env[1334]: time="2024-12-13T02:20:03.714871375Z" level=info msg="RemoveContainer for \"e3d02551b699cfa033035f0688f81055ae131b92a3b1c29850976df94b0690d0\" returns successfully" Dec 13 02:20:03.715174 kubelet[1675]: I1213 02:20:03.715147 1675 scope.go:117] "RemoveContainer" containerID="3766927637267af642b06951243edd9b73a20060e3eb4f34f420379dfa27d5c0" Dec 13 02:20:03.716778 env[1334]: time="2024-12-13T02:20:03.716741540Z" level=info msg="RemoveContainer for \"3766927637267af642b06951243edd9b73a20060e3eb4f34f420379dfa27d5c0\"" Dec 13 02:20:03.720918 env[1334]: time="2024-12-13T02:20:03.720873449Z" level=info msg="RemoveContainer for \"3766927637267af642b06951243edd9b73a20060e3eb4f34f420379dfa27d5c0\" returns successfully" Dec 13 02:20:03.721113 kubelet[1675]: I1213 02:20:03.721085 1675 scope.go:117] "RemoveContainer" containerID="dd11278fc2255f282e909e8a91011172c422bbeea62531853f11f1caee550a18" Dec 13 02:20:03.723437 env[1334]: time="2024-12-13T02:20:03.723392610Z" level=info msg="RemoveContainer for \"dd11278fc2255f282e909e8a91011172c422bbeea62531853f11f1caee550a18\"" Dec 13 02:20:03.728352 env[1334]: time="2024-12-13T02:20:03.728306788Z" level=info msg="RemoveContainer for \"dd11278fc2255f282e909e8a91011172c422bbeea62531853f11f1caee550a18\" returns successfully" Dec 13 02:20:04.124346 kubelet[1675]: I1213 02:20:04.124287 1675 topology_manager.go:215] "Topology Admit Handler" podUID="155a105d-df7b-4b6a-87f8-765befe27114" podNamespace="kube-system" podName="cilium-operator-5cc964979-65cjx" Dec 13 02:20:04.124346 kubelet[1675]: E1213 02:20:04.124360 1675 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" containerName="mount-cgroup" Dec 13 02:20:04.124697 kubelet[1675]: E1213 02:20:04.124377 1675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" containerName="apply-sysctl-overwrites" Dec 13 02:20:04.124697 kubelet[1675]: E1213 02:20:04.124388 1675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" containerName="cilium-agent" Dec 13 02:20:04.124697 kubelet[1675]: E1213 02:20:04.124399 1675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" containerName="mount-bpf-fs" Dec 13 02:20:04.124697 kubelet[1675]: E1213 02:20:04.124412 1675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" containerName="clean-cilium-state" Dec 13 02:20:04.124697 kubelet[1675]: I1213 02:20:04.124440 1675 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" containerName="cilium-agent" Dec 13 02:20:04.155041 kubelet[1675]: I1213 02:20:04.154986 1675 topology_manager.go:215] "Topology Admit Handler" podUID="11753e67-c9d6-4064-ba4b-d9c480cd8226" podNamespace="kube-system" podName="cilium-llfg2" Dec 13 02:20:04.307027 kubelet[1675]: I1213 02:20:04.306986 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/155a105d-df7b-4b6a-87f8-765befe27114-cilium-config-path\") pod \"cilium-operator-5cc964979-65cjx\" (UID: \"155a105d-df7b-4b6a-87f8-765befe27114\") " pod="kube-system/cilium-operator-5cc964979-65cjx" Dec 13 02:20:04.307249 kubelet[1675]: I1213 02:20:04.307049 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-bpf-maps\") pod \"cilium-llfg2\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " pod="kube-system/cilium-llfg2" Dec 13 02:20:04.307249 kubelet[1675]: I1213 02:20:04.307090 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-hostproc\") pod \"cilium-llfg2\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " pod="kube-system/cilium-llfg2" Dec 13 02:20:04.307249 kubelet[1675]: I1213 02:20:04.307124 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-cni-path\") pod \"cilium-llfg2\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " pod="kube-system/cilium-llfg2" Dec 13 02:20:04.307249 kubelet[1675]: I1213 02:20:04.307157 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/11753e67-c9d6-4064-ba4b-d9c480cd8226-hubble-tls\") pod \"cilium-llfg2\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " pod="kube-system/cilium-llfg2" Dec 13 02:20:04.307249 kubelet[1675]: I1213 02:20:04.307186 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-run\") pod \"cilium-llfg2\" (UID: 
\"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " pod="kube-system/cilium-llfg2" Dec 13 02:20:04.307249 kubelet[1675]: I1213 02:20:04.307219 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-xtables-lock\") pod \"cilium-llfg2\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " pod="kube-system/cilium-llfg2" Dec 13 02:20:04.307621 kubelet[1675]: I1213 02:20:04.307253 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-config-path\") pod \"cilium-llfg2\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " pod="kube-system/cilium-llfg2" Dec 13 02:20:04.307621 kubelet[1675]: I1213 02:20:04.307289 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6kmm\" (UniqueName: \"kubernetes.io/projected/11753e67-c9d6-4064-ba4b-d9c480cd8226-kube-api-access-z6kmm\") pod \"cilium-llfg2\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " pod="kube-system/cilium-llfg2" Dec 13 02:20:04.307621 kubelet[1675]: I1213 02:20:04.307328 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwcvp\" (UniqueName: \"kubernetes.io/projected/155a105d-df7b-4b6a-87f8-765befe27114-kube-api-access-hwcvp\") pod \"cilium-operator-5cc964979-65cjx\" (UID: \"155a105d-df7b-4b6a-87f8-765befe27114\") " pod="kube-system/cilium-operator-5cc964979-65cjx" Dec 13 02:20:04.307621 kubelet[1675]: I1213 02:20:04.307364 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-etc-cni-netd\") pod \"cilium-llfg2\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " pod="kube-system/cilium-llfg2" Dec 13 02:20:04.307621 kubelet[1675]: I1213 02:20:04.307404 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/11753e67-c9d6-4064-ba4b-d9c480cd8226-clustermesh-secrets\") pod \"cilium-llfg2\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " pod="kube-system/cilium-llfg2" Dec 13 02:20:04.307904 kubelet[1675]: I1213 02:20:04.307459 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-lib-modules\") pod \"cilium-llfg2\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " pod="kube-system/cilium-llfg2" Dec 13 02:20:04.307904 kubelet[1675]: I1213 02:20:04.307496 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-cgroup\") pod \"cilium-llfg2\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " pod="kube-system/cilium-llfg2" Dec 13 02:20:04.307904 kubelet[1675]: I1213 02:20:04.307532 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-ipsec-secrets\") pod \"cilium-llfg2\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " pod="kube-system/cilium-llfg2" Dec 13 02:20:04.307904 
kubelet[1675]: I1213 02:20:04.307586 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-host-proc-sys-net\") pod \"cilium-llfg2\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " pod="kube-system/cilium-llfg2" Dec 13 02:20:04.307904 kubelet[1675]: I1213 02:20:04.307623 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-host-proc-sys-kernel\") pod \"cilium-llfg2\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " pod="kube-system/cilium-llfg2" Dec 13 02:20:04.325323 kubelet[1675]: E1213 02:20:04.325264 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:20:04.482236 kubelet[1675]: I1213 02:20:04.482183 1675 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="75b273d3-cac7-4e6e-b2b4-c6cdb2de2538" path="/var/lib/kubelet/pods/75b273d3-cac7-4e6e-b2b4-c6cdb2de2538/volumes" Dec 13 02:20:04.729259 env[1334]: time="2024-12-13T02:20:04.729185851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-65cjx,Uid:155a105d-df7b-4b6a-87f8-765befe27114,Namespace:kube-system,Attempt:0,}" Dec 13 02:20:04.748073 env[1334]: time="2024-12-13T02:20:04.747896361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:20:04.748073 env[1334]: time="2024-12-13T02:20:04.747952380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:20:04.748073 env[1334]: time="2024-12-13T02:20:04.747972203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:20:04.748815 env[1334]: time="2024-12-13T02:20:04.748744252Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d06ca339496c2c489b0be0147e8ac62832fd59a2df493b3c1b0e7188916c12c pid=3198 runtime=io.containerd.runc.v2 Dec 13 02:20:04.759852 env[1334]: time="2024-12-13T02:20:04.759801106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-llfg2,Uid:11753e67-c9d6-4064-ba4b-d9c480cd8226,Namespace:kube-system,Attempt:0,}" Dec 13 02:20:04.787295 env[1334]: time="2024-12-13T02:20:04.787188812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:20:04.787522 env[1334]: time="2024-12-13T02:20:04.787316620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:20:04.787646 env[1334]: time="2024-12-13T02:20:04.787596622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:20:04.788905 env[1334]: time="2024-12-13T02:20:04.788066866Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42 pid=3229 runtime=io.containerd.runc.v2 Dec 13 02:20:04.859020 env[1334]: time="2024-12-13T02:20:04.858970710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-65cjx,Uid:155a105d-df7b-4b6a-87f8-765befe27114,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d06ca339496c2c489b0be0147e8ac62832fd59a2df493b3c1b0e7188916c12c\"" Dec 13 02:20:04.861637 env[1334]: time="2024-12-13T02:20:04.861586580Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:20:04.862725 env[1334]: time="2024-12-13T02:20:04.862670528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-llfg2,Uid:11753e67-c9d6-4064-ba4b-d9c480cd8226,Namespace:kube-system,Attempt:0,} returns sandbox id \"99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42\"" Dec 13 02:20:04.866419 env[1334]: time="2024-12-13T02:20:04.866380852Z" level=info msg="CreateContainer within sandbox \"99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:20:04.880301 env[1334]: time="2024-12-13T02:20:04.880266280Z" level=info msg="CreateContainer within sandbox \"99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d5307a01314dfdc3ff27a386c2154aff6c3f2e4c5e28f1010b64f91d77f5174a\"" Dec 13 02:20:04.880973 env[1334]: time="2024-12-13T02:20:04.880938899Z" level=info msg="StartContainer for \"d5307a01314dfdc3ff27a386c2154aff6c3f2e4c5e28f1010b64f91d77f5174a\"" Dec 13 02:20:04.949497 env[1334]: time="2024-12-13T02:20:04.948359830Z" level=info msg="StartContainer for \"d5307a01314dfdc3ff27a386c2154aff6c3f2e4c5e28f1010b64f91d77f5174a\" returns successfully" Dec 13 02:20:04.989711 env[1334]: time="2024-12-13T02:20:04.989636223Z" level=info msg="shim disconnected" id=d5307a01314dfdc3ff27a386c2154aff6c3f2e4c5e28f1010b64f91d77f5174a Dec 13 02:20:04.989711 env[1334]: time="2024-12-13T02:20:04.989702498Z" level=warning msg="cleaning up after shim disconnected" id=d5307a01314dfdc3ff27a386c2154aff6c3f2e4c5e28f1010b64f91d77f5174a namespace=k8s.io Dec 13 02:20:04.989711 env[1334]: time="2024-12-13T02:20:04.989717649Z" level=info msg="cleaning up dead shim" Dec 13 02:20:05.001816 env[1334]: time="2024-12-13T02:20:05.000948779Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3325 runtime=io.containerd.runc.v2\n" Dec 13 02:20:05.326345 kubelet[1675]: E1213 02:20:05.326191 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:20:05.699199 env[1334]: time="2024-12-13T02:20:05.699044950Z" level=info msg="StopPodSandbox for \"99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42\"" Dec 13 02:20:05.699199 env[1334]: time="2024-12-13T02:20:05.699123397Z" level=info msg="Container to stop \"d5307a01314dfdc3ff27a386c2154aff6c3f2e4c5e28f1010b64f91d77f5174a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:20:05.703128 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42-shm.mount: Deactivated successfully. Dec 13 02:20:05.739288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42-rootfs.mount: Deactivated successfully. Dec 13 02:20:05.746807 env[1334]: time="2024-12-13T02:20:05.746744277Z" level=info msg="shim disconnected" id=99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42 Dec 13 02:20:05.748063 env[1334]: time="2024-12-13T02:20:05.748024916Z" level=warning msg="cleaning up after shim disconnected" id=99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42 namespace=k8s.io Dec 13 02:20:05.748063 env[1334]: time="2024-12-13T02:20:05.748057772Z" level=info msg="cleaning up dead shim" Dec 13 02:20:05.759569 env[1334]: time="2024-12-13T02:20:05.759518278Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3356 runtime=io.containerd.runc.v2\n" Dec 13 02:20:05.759947 env[1334]: time="2024-12-13T02:20:05.759907570Z" level=info msg="TearDown network for sandbox \"99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42\" successfully" Dec 13 02:20:05.760056 env[1334]: time="2024-12-13T02:20:05.759946945Z" level=info msg="StopPodSandbox for \"99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42\" returns successfully" Dec 13 02:20:05.924469 kubelet[1675]: I1213 02:20:05.924405 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-etc-cni-netd\") pod \"11753e67-c9d6-4064-ba4b-d9c480cd8226\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " Dec 13 02:20:05.924713 kubelet[1675]: I1213 02:20:05.924514 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/11753e67-c9d6-4064-ba4b-d9c480cd8226-clustermesh-secrets\") pod \"11753e67-c9d6-4064-ba4b-d9c480cd8226\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " Dec 13 02:20:05.924713 kubelet[1675]: I1213 02:20:05.924558 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/11753e67-c9d6-4064-ba4b-d9c480cd8226-hubble-tls\") pod \"11753e67-c9d6-4064-ba4b-d9c480cd8226\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " Dec 13 02:20:05.924713 kubelet[1675]: I1213 02:20:05.924584 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-lib-modules\") pod \"11753e67-c9d6-4064-ba4b-d9c480cd8226\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " Dec 13 02:20:05.924713 kubelet[1675]: I1213 02:20:05.924619 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-cgroup\") pod \"11753e67-c9d6-4064-ba4b-d9c480cd8226\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " Dec 13 02:20:05.924713 kubelet[1675]: I1213 02:20:05.924649 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-hostproc\") pod \"11753e67-c9d6-4064-ba4b-d9c480cd8226\" (UID: 
\"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " Dec 13 02:20:05.924713 kubelet[1675]: I1213 02:20:05.924680 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-bpf-maps\") pod \"11753e67-c9d6-4064-ba4b-d9c480cd8226\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " Dec 13 02:20:05.925077 kubelet[1675]: I1213 02:20:05.924709 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-host-proc-sys-net\") pod \"11753e67-c9d6-4064-ba4b-d9c480cd8226\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " Dec 13 02:20:05.925077 kubelet[1675]: I1213 02:20:05.924738 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-xtables-lock\") pod \"11753e67-c9d6-4064-ba4b-d9c480cd8226\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " Dec 13 02:20:05.925077 kubelet[1675]: I1213 02:20:05.924774 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-ipsec-secrets\") pod \"11753e67-c9d6-4064-ba4b-d9c480cd8226\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " Dec 13 02:20:05.925077 kubelet[1675]: I1213 02:20:05.924811 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6kmm\" (UniqueName: \"kubernetes.io/projected/11753e67-c9d6-4064-ba4b-d9c480cd8226-kube-api-access-z6kmm\") pod \"11753e67-c9d6-4064-ba4b-d9c480cd8226\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " Dec 13 02:20:05.925077 kubelet[1675]: I1213 02:20:05.924840 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-cni-path\") pod \"11753e67-c9d6-4064-ba4b-d9c480cd8226\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " Dec 13 02:20:05.925077 kubelet[1675]: I1213 02:20:05.924873 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-run\") pod \"11753e67-c9d6-4064-ba4b-d9c480cd8226\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " Dec 13 02:20:05.925384 kubelet[1675]: I1213 02:20:05.924911 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-config-path\") pod \"11753e67-c9d6-4064-ba4b-d9c480cd8226\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " Dec 13 02:20:05.925384 kubelet[1675]: I1213 02:20:05.924949 1675 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-host-proc-sys-kernel\") pod \"11753e67-c9d6-4064-ba4b-d9c480cd8226\" (UID: \"11753e67-c9d6-4064-ba4b-d9c480cd8226\") " Dec 13 02:20:05.925384 kubelet[1675]: I1213 02:20:05.925051 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "11753e67-c9d6-4064-ba4b-d9c480cd8226" (UID: 
"11753e67-c9d6-4064-ba4b-d9c480cd8226"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:05.925384 kubelet[1675]: I1213 02:20:05.925100 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "11753e67-c9d6-4064-ba4b-d9c480cd8226" (UID: "11753e67-c9d6-4064-ba4b-d9c480cd8226"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:05.925637 kubelet[1675]: I1213 02:20:05.925598 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "11753e67-c9d6-4064-ba4b-d9c480cd8226" (UID: "11753e67-c9d6-4064-ba4b-d9c480cd8226"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:05.925703 kubelet[1675]: I1213 02:20:05.925654 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "11753e67-c9d6-4064-ba4b-d9c480cd8226" (UID: "11753e67-c9d6-4064-ba4b-d9c480cd8226"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:05.926974 kubelet[1675]: I1213 02:20:05.926506 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "11753e67-c9d6-4064-ba4b-d9c480cd8226" (UID: "11753e67-c9d6-4064-ba4b-d9c480cd8226"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:05.926974 kubelet[1675]: I1213 02:20:05.926568 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "11753e67-c9d6-4064-ba4b-d9c480cd8226" (UID: "11753e67-c9d6-4064-ba4b-d9c480cd8226"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:05.926974 kubelet[1675]: I1213 02:20:05.926595 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-hostproc" (OuterVolumeSpecName: "hostproc") pod "11753e67-c9d6-4064-ba4b-d9c480cd8226" (UID: "11753e67-c9d6-4064-ba4b-d9c480cd8226"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:05.926974 kubelet[1675]: I1213 02:20:05.926620 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "11753e67-c9d6-4064-ba4b-d9c480cd8226" (UID: "11753e67-c9d6-4064-ba4b-d9c480cd8226"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:05.939314 systemd[1]: var-lib-kubelet-pods-11753e67\x2dc9d6\x2d4064\x2dba4b\x2dd9c480cd8226-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Dec 13 02:20:05.943968 kubelet[1675]: I1213 02:20:05.939406 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11753e67-c9d6-4064-ba4b-d9c480cd8226-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "11753e67-c9d6-4064-ba4b-d9c480cd8226" (UID: "11753e67-c9d6-4064-ba4b-d9c480cd8226"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:20:05.943968 kubelet[1675]: I1213 02:20:05.939561 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11753e67-c9d6-4064-ba4b-d9c480cd8226-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "11753e67-c9d6-4064-ba4b-d9c480cd8226" (UID: "11753e67-c9d6-4064-ba4b-d9c480cd8226"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:20:05.943968 kubelet[1675]: I1213 02:20:05.939635 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "11753e67-c9d6-4064-ba4b-d9c480cd8226" (UID: "11753e67-c9d6-4064-ba4b-d9c480cd8226"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:20:05.943968 kubelet[1675]: I1213 02:20:05.939674 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "11753e67-c9d6-4064-ba4b-d9c480cd8226" (UID: "11753e67-c9d6-4064-ba4b-d9c480cd8226"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:05.943968 kubelet[1675]: I1213 02:20:05.939703 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-cni-path" (OuterVolumeSpecName: "cni-path") pod "11753e67-c9d6-4064-ba4b-d9c480cd8226" (UID: "11753e67-c9d6-4064-ba4b-d9c480cd8226"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:20:05.944290 kubelet[1675]: I1213 02:20:05.942806 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "11753e67-c9d6-4064-ba4b-d9c480cd8226" (UID: "11753e67-c9d6-4064-ba4b-d9c480cd8226"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:20:05.944290 kubelet[1675]: I1213 02:20:05.943665 1675 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11753e67-c9d6-4064-ba4b-d9c480cd8226-kube-api-access-z6kmm" (OuterVolumeSpecName: "kube-api-access-z6kmm") pod "11753e67-c9d6-4064-ba4b-d9c480cd8226" (UID: "11753e67-c9d6-4064-ba4b-d9c480cd8226"). InnerVolumeSpecName "kube-api-access-z6kmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:20:05.945998 systemd[1]: var-lib-kubelet-pods-11753e67\x2dc9d6\x2d4064\x2dba4b\x2dd9c480cd8226-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz6kmm.mount: Deactivated successfully. Dec 13 02:20:05.946204 systemd[1]: var-lib-kubelet-pods-11753e67\x2dc9d6\x2d4064\x2dba4b\x2dd9c480cd8226-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 02:20:05.946376 systemd[1]: var-lib-kubelet-pods-11753e67\x2dc9d6\x2d4064\x2dba4b\x2dd9c480cd8226-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:20:06.025263 kubelet[1675]: I1213 02:20:06.025192 1675 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-host-proc-sys-kernel\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:06.025263 kubelet[1675]: I1213 02:20:06.025246 1675 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-config-path\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:06.025263 kubelet[1675]: I1213 02:20:06.025265 1675 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-etc-cni-netd\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:06.025628 kubelet[1675]: I1213 02:20:06.025281 1675 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/11753e67-c9d6-4064-ba4b-d9c480cd8226-clustermesh-secrets\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:06.025628 kubelet[1675]: I1213 02:20:06.025300 1675 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/11753e67-c9d6-4064-ba4b-d9c480cd8226-hubble-tls\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:06.025628 kubelet[1675]: I1213 02:20:06.025316 1675 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-lib-modules\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:06.025628 kubelet[1675]: I1213 02:20:06.025330 1675 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-cgroup\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:06.025628 kubelet[1675]: I1213 02:20:06.025345 1675 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-hostproc\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:06.025628 kubelet[1675]: I1213 02:20:06.025360 1675 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-bpf-maps\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:06.025628 kubelet[1675]: I1213 02:20:06.025379 1675 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-host-proc-sys-net\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:06.025628 kubelet[1675]: I1213 02:20:06.025396 1675 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-xtables-lock\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:06.025904 kubelet[1675]: I1213 02:20:06.025411 1675 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-z6kmm\" (UniqueName: \"kubernetes.io/projected/11753e67-c9d6-4064-ba4b-d9c480cd8226-kube-api-access-z6kmm\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:06.025904 kubelet[1675]: I1213 02:20:06.025426 1675 
reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-ipsec-secrets\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:06.025904 kubelet[1675]: I1213 02:20:06.025467 1675 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-cni-path\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:06.025904 kubelet[1675]: I1213 02:20:06.025507 1675 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/11753e67-c9d6-4064-ba4b-d9c480cd8226-cilium-run\") on node \"10.128.0.79\" DevicePath \"\"" Dec 13 02:20:06.278307 kubelet[1675]: E1213 02:20:06.278154 1675 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:20:06.325810 kubelet[1675]: I1213 02:20:06.325776 1675 scope.go:117] "RemoveContainer" containerID="d5307a01314dfdc3ff27a386c2154aff6c3f2e4c5e28f1010b64f91d77f5174a" Dec 13 02:20:06.326476 kubelet[1675]: E1213 02:20:06.326425 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:20:06.327576 env[1334]: time="2024-12-13T02:20:06.327525513Z" level=info msg="RemoveContainer for \"d5307a01314dfdc3ff27a386c2154aff6c3f2e4c5e28f1010b64f91d77f5174a\"" Dec 13 02:20:06.331863 env[1334]: time="2024-12-13T02:20:06.331816261Z" level=info msg="RemoveContainer for \"d5307a01314dfdc3ff27a386c2154aff6c3f2e4c5e28f1010b64f91d77f5174a\" returns successfully" Dec 13 02:20:06.333207 env[1334]: time="2024-12-13T02:20:06.333171013Z" level=info msg="StopPodSandbox for \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\"" Dec 13 02:20:06.333523 env[1334]: time="2024-12-13T02:20:06.333434769Z" level=info msg="TearDown network for sandbox \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" successfully" Dec 13 02:20:06.333523 env[1334]: time="2024-12-13T02:20:06.333516924Z" level=info msg="StopPodSandbox for \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" returns successfully" Dec 13 02:20:06.334091 env[1334]: time="2024-12-13T02:20:06.334052732Z" level=info msg="RemovePodSandbox for \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\"" Dec 13 02:20:06.334212 env[1334]: time="2024-12-13T02:20:06.334093734Z" level=info msg="Forcibly stopping sandbox \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\"" Dec 13 02:20:06.334212 env[1334]: time="2024-12-13T02:20:06.334196832Z" level=info msg="TearDown network for sandbox \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" successfully" Dec 13 02:20:06.337922 env[1334]: time="2024-12-13T02:20:06.337880884Z" level=info msg="RemovePodSandbox \"250e4c4be1e581394bcaf646e503a9d5583ddce16356189fac549ec161472e1f\" returns successfully" Dec 13 02:20:06.338435 env[1334]: time="2024-12-13T02:20:06.338396382Z" level=info msg="StopPodSandbox for \"99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42\"" Dec 13 02:20:06.338602 env[1334]: time="2024-12-13T02:20:06.338546819Z" level=info msg="TearDown network for sandbox \"99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42\" successfully" Dec 13 02:20:06.338698 env[1334]: time="2024-12-13T02:20:06.338604015Z" level=info msg="StopPodSandbox for \"99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42\" returns 
successfully" Dec 13 02:20:06.339067 env[1334]: time="2024-12-13T02:20:06.339021566Z" level=info msg="RemovePodSandbox for \"99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42\"" Dec 13 02:20:06.339187 env[1334]: time="2024-12-13T02:20:06.339058798Z" level=info msg="Forcibly stopping sandbox \"99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42\"" Dec 13 02:20:06.339187 env[1334]: time="2024-12-13T02:20:06.339153745Z" level=info msg="TearDown network for sandbox \"99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42\" successfully" Dec 13 02:20:06.342746 env[1334]: time="2024-12-13T02:20:06.342690830Z" level=info msg="RemovePodSandbox \"99f8e8dfb75620880f5b4801a6cea0e1e6bff8e9a88bb547635cc98c98da5c42\" returns successfully" Dec 13 02:20:06.404951 kubelet[1675]: E1213 02:20:06.404898 1675 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:20:06.790145 kubelet[1675]: I1213 02:20:06.790097 1675 topology_manager.go:215] "Topology Admit Handler" podUID="15a887dd-271b-46c7-bb23-5a6862c50b63" podNamespace="kube-system" podName="cilium-jzhkj" Dec 13 02:20:06.790384 kubelet[1675]: E1213 02:20:06.790173 1675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="11753e67-c9d6-4064-ba4b-d9c480cd8226" containerName="mount-cgroup" Dec 13 02:20:06.790384 kubelet[1675]: I1213 02:20:06.790211 1675 memory_manager.go:354] "RemoveStaleState removing state" podUID="11753e67-c9d6-4064-ba4b-d9c480cd8226" containerName="mount-cgroup" Dec 13 02:20:06.929638 kubelet[1675]: I1213 02:20:06.929572 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15a887dd-271b-46c7-bb23-5a6862c50b63-xtables-lock\") pod \"cilium-jzhkj\" (UID: \"15a887dd-271b-46c7-bb23-5a6862c50b63\") " pod="kube-system/cilium-jzhkj" Dec 13 02:20:06.929638 kubelet[1675]: I1213 02:20:06.929643 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15a887dd-271b-46c7-bb23-5a6862c50b63-hostproc\") pod \"cilium-jzhkj\" (UID: \"15a887dd-271b-46c7-bb23-5a6862c50b63\") " pod="kube-system/cilium-jzhkj" Dec 13 02:20:06.929937 kubelet[1675]: I1213 02:20:06.929677 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15a887dd-271b-46c7-bb23-5a6862c50b63-cilium-cgroup\") pod \"cilium-jzhkj\" (UID: \"15a887dd-271b-46c7-bb23-5a6862c50b63\") " pod="kube-system/cilium-jzhkj" Dec 13 02:20:06.929937 kubelet[1675]: I1213 02:20:06.929709 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15a887dd-271b-46c7-bb23-5a6862c50b63-cni-path\") pod \"cilium-jzhkj\" (UID: \"15a887dd-271b-46c7-bb23-5a6862c50b63\") " pod="kube-system/cilium-jzhkj" Dec 13 02:20:06.929937 kubelet[1675]: I1213 02:20:06.929747 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15a887dd-271b-46c7-bb23-5a6862c50b63-cilium-run\") pod \"cilium-jzhkj\" (UID: \"15a887dd-271b-46c7-bb23-5a6862c50b63\") " pod="kube-system/cilium-jzhkj" Dec 13 02:20:06.929937 kubelet[1675]: I1213 02:20:06.929783 1675 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15a887dd-271b-46c7-bb23-5a6862c50b63-host-proc-sys-kernel\") pod \"cilium-jzhkj\" (UID: \"15a887dd-271b-46c7-bb23-5a6862c50b63\") " pod="kube-system/cilium-jzhkj" Dec 13 02:20:06.930164 kubelet[1675]: I1213 02:20:06.929982 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15a887dd-271b-46c7-bb23-5a6862c50b63-host-proc-sys-net\") pod \"cilium-jzhkj\" (UID: \"15a887dd-271b-46c7-bb23-5a6862c50b63\") " pod="kube-system/cilium-jzhkj" Dec 13 02:20:06.930164 kubelet[1675]: I1213 02:20:06.930023 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk4hk\" (UniqueName: \"kubernetes.io/projected/15a887dd-271b-46c7-bb23-5a6862c50b63-kube-api-access-dk4hk\") pod \"cilium-jzhkj\" (UID: \"15a887dd-271b-46c7-bb23-5a6862c50b63\") " pod="kube-system/cilium-jzhkj" Dec 13 02:20:06.930164 kubelet[1675]: I1213 02:20:06.930061 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/15a887dd-271b-46c7-bb23-5a6862c50b63-cilium-ipsec-secrets\") pod \"cilium-jzhkj\" (UID: \"15a887dd-271b-46c7-bb23-5a6862c50b63\") " pod="kube-system/cilium-jzhkj" Dec 13 02:20:06.930164 kubelet[1675]: I1213 02:20:06.930098 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15a887dd-271b-46c7-bb23-5a6862c50b63-hubble-tls\") pod \"cilium-jzhkj\" (UID: \"15a887dd-271b-46c7-bb23-5a6862c50b63\") " pod="kube-system/cilium-jzhkj" Dec 13 02:20:06.930164 kubelet[1675]: I1213 02:20:06.930136 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15a887dd-271b-46c7-bb23-5a6862c50b63-lib-modules\") pod \"cilium-jzhkj\" (UID: \"15a887dd-271b-46c7-bb23-5a6862c50b63\") " pod="kube-system/cilium-jzhkj" Dec 13 02:20:06.930435 kubelet[1675]: I1213 02:20:06.930171 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15a887dd-271b-46c7-bb23-5a6862c50b63-cilium-config-path\") pod \"cilium-jzhkj\" (UID: \"15a887dd-271b-46c7-bb23-5a6862c50b63\") " pod="kube-system/cilium-jzhkj" Dec 13 02:20:06.930435 kubelet[1675]: I1213 02:20:06.930207 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15a887dd-271b-46c7-bb23-5a6862c50b63-bpf-maps\") pod \"cilium-jzhkj\" (UID: \"15a887dd-271b-46c7-bb23-5a6862c50b63\") " pod="kube-system/cilium-jzhkj" Dec 13 02:20:06.930435 kubelet[1675]: I1213 02:20:06.930239 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15a887dd-271b-46c7-bb23-5a6862c50b63-etc-cni-netd\") pod \"cilium-jzhkj\" (UID: \"15a887dd-271b-46c7-bb23-5a6862c50b63\") " pod="kube-system/cilium-jzhkj" Dec 13 02:20:06.930435 kubelet[1675]: I1213 02:20:06.930274 1675 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/15a887dd-271b-46c7-bb23-5a6862c50b63-clustermesh-secrets\") pod \"cilium-jzhkj\" (UID: \"15a887dd-271b-46c7-bb23-5a6862c50b63\") " pod="kube-system/cilium-jzhkj" Dec 13 02:20:07.096316 env[1334]: time="2024-12-13T02:20:07.095275840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jzhkj,Uid:15a887dd-271b-46c7-bb23-5a6862c50b63,Namespace:kube-system,Attempt:0,}" Dec 13 02:20:07.116799 env[1334]: time="2024-12-13T02:20:07.116711649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:20:07.116799 env[1334]: time="2024-12-13T02:20:07.116768996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:20:07.117106 env[1334]: time="2024-12-13T02:20:07.117047419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:20:07.117482 env[1334]: time="2024-12-13T02:20:07.117402170Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/896b97f0d18578addd583e5d6dcb414b091d69825644fa7caad654b15b1b2f74 pid=3386 runtime=io.containerd.runc.v2 Dec 13 02:20:07.170588 env[1334]: time="2024-12-13T02:20:07.170441162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jzhkj,Uid:15a887dd-271b-46c7-bb23-5a6862c50b63,Namespace:kube-system,Attempt:0,} returns sandbox id \"896b97f0d18578addd583e5d6dcb414b091d69825644fa7caad654b15b1b2f74\"" Dec 13 02:20:07.175094 env[1334]: time="2024-12-13T02:20:07.175037954Z" level=info msg="CreateContainer within sandbox \"896b97f0d18578addd583e5d6dcb414b091d69825644fa7caad654b15b1b2f74\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:20:07.196169 env[1334]: time="2024-12-13T02:20:07.196106849Z" level=info msg="CreateContainer within sandbox \"896b97f0d18578addd583e5d6dcb414b091d69825644fa7caad654b15b1b2f74\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8d3593855a21e0bfaa9b5b586e09e9502343135113d54cfa500aeb23cf3bd33f\"" Dec 13 02:20:07.197020 env[1334]: time="2024-12-13T02:20:07.196947281Z" level=info msg="StartContainer for \"8d3593855a21e0bfaa9b5b586e09e9502343135113d54cfa500aeb23cf3bd33f\"" Dec 13 02:20:07.263682 kubelet[1675]: I1213 02:20:07.263645 1675 setters.go:568] "Node became not ready" node="10.128.0.79" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:20:07Z","lastTransitionTime":"2024-12-13T02:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 02:20:07.293715 env[1334]: time="2024-12-13T02:20:07.293660497Z" level=info msg="StartContainer for \"8d3593855a21e0bfaa9b5b586e09e9502343135113d54cfa500aeb23cf3bd33f\" returns successfully" Dec 13 02:20:07.327347 kubelet[1675]: E1213 02:20:07.327289 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:20:07.378397 env[1334]: time="2024-12-13T02:20:07.377717592Z" level=info msg="shim disconnected" id=8d3593855a21e0bfaa9b5b586e09e9502343135113d54cfa500aeb23cf3bd33f Dec 13 02:20:07.378397 env[1334]: time="2024-12-13T02:20:07.377776988Z" level=warning msg="cleaning up after shim disconnected" 
id=8d3593855a21e0bfaa9b5b586e09e9502343135113d54cfa500aeb23cf3bd33f namespace=k8s.io
Dec 13 02:20:07.378397 env[1334]: time="2024-12-13T02:20:07.377790345Z" level=info msg="cleaning up dead shim"
Dec 13 02:20:07.395693 env[1334]: time="2024-12-13T02:20:07.395640671Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3472 runtime=io.containerd.runc.v2\n"
Dec 13 02:20:07.709731 env[1334]: time="2024-12-13T02:20:07.709342052Z" level=info msg="CreateContainer within sandbox \"896b97f0d18578addd583e5d6dcb414b091d69825644fa7caad654b15b1b2f74\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:20:07.735021 env[1334]: time="2024-12-13T02:20:07.734961508Z" level=info msg="CreateContainer within sandbox \"896b97f0d18578addd583e5d6dcb414b091d69825644fa7caad654b15b1b2f74\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7ef938d3305f6262b15b4b37ac8077d4fe4c49629b34cc3a9cd6532b0427cdd5\""
Dec 13 02:20:07.736169 env[1334]: time="2024-12-13T02:20:07.736131540Z" level=info msg="StartContainer for \"7ef938d3305f6262b15b4b37ac8077d4fe4c49629b34cc3a9cd6532b0427cdd5\""
Dec 13 02:20:07.835488 env[1334]: time="2024-12-13T02:20:07.835405031Z" level=info msg="StartContainer for \"7ef938d3305f6262b15b4b37ac8077d4fe4c49629b34cc3a9cd6532b0427cdd5\" returns successfully"
Dec 13 02:20:07.905109 env[1334]: time="2024-12-13T02:20:07.905048410Z" level=info msg="shim disconnected" id=7ef938d3305f6262b15b4b37ac8077d4fe4c49629b34cc3a9cd6532b0427cdd5
Dec 13 02:20:07.905575 env[1334]: time="2024-12-13T02:20:07.905543159Z" level=warning msg="cleaning up after shim disconnected" id=7ef938d3305f6262b15b4b37ac8077d4fe4c49629b34cc3a9cd6532b0427cdd5 namespace=k8s.io
Dec 13 02:20:07.905754 env[1334]: time="2024-12-13T02:20:07.905732382Z" level=info msg="cleaning up dead shim"
Dec 13 02:20:07.930757 env[1334]: time="2024-12-13T02:20:07.930701240Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3533 runtime=io.containerd.runc.v2\n"
Dec 13 02:20:08.328077 kubelet[1675]: E1213 02:20:08.327985 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:08.476358 env[1334]: time="2024-12-13T02:20:08.476286436Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:08.480681 env[1334]: time="2024-12-13T02:20:08.479868331Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:08.482332 env[1334]: time="2024-12-13T02:20:08.482287819Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:08.482985 env[1334]: time="2024-12-13T02:20:08.482936617Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 02:20:08.483428 kubelet[1675]: I1213 02:20:08.483386 1675 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="11753e67-c9d6-4064-ba4b-d9c480cd8226" path="/var/lib/kubelet/pods/11753e67-c9d6-4064-ba4b-d9c480cd8226/volumes"
Dec 13 02:20:08.486162 env[1334]: time="2024-12-13T02:20:08.486107729Z" level=info msg="CreateContainer within sandbox \"7d06ca339496c2c489b0be0147e8ac62832fd59a2df493b3c1b0e7188916c12c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 02:20:08.503863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1270552046.mount: Deactivated successfully.
Dec 13 02:20:08.512963 env[1334]: time="2024-12-13T02:20:08.512904182Z" level=info msg="CreateContainer within sandbox \"7d06ca339496c2c489b0be0147e8ac62832fd59a2df493b3c1b0e7188916c12c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1972615354b69fb5de3f070bfee5de91f4b05ee60334dd40018a550b77e1a61d\""
Dec 13 02:20:08.513769 env[1334]: time="2024-12-13T02:20:08.513716999Z" level=info msg="StartContainer for \"1972615354b69fb5de3f070bfee5de91f4b05ee60334dd40018a550b77e1a61d\""
Dec 13 02:20:08.596428 env[1334]: time="2024-12-13T02:20:08.589915267Z" level=info msg="StartContainer for \"1972615354b69fb5de3f070bfee5de91f4b05ee60334dd40018a550b77e1a61d\" returns successfully"
Dec 13 02:20:08.714852 env[1334]: time="2024-12-13T02:20:08.714797579Z" level=info msg="CreateContainer within sandbox \"896b97f0d18578addd583e5d6dcb414b091d69825644fa7caad654b15b1b2f74\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:20:08.741856 env[1334]: time="2024-12-13T02:20:08.741787282Z" level=info msg="CreateContainer within sandbox \"896b97f0d18578addd583e5d6dcb414b091d69825644fa7caad654b15b1b2f74\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2814b371f8979b97a4c969f2907ccfc6436b6b6a262a7a102ed6fdb1d1bcd9dd\""
Dec 13 02:20:08.742794 env[1334]: time="2024-12-13T02:20:08.742734323Z" level=info msg="StartContainer for \"2814b371f8979b97a4c969f2907ccfc6436b6b6a262a7a102ed6fdb1d1bcd9dd\""
Dec 13 02:20:08.781735 kubelet[1675]: I1213 02:20:08.781651 1675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-65cjx" podStartSLOduration=1.15835029 podStartE2EDuration="4.781567903s" podCreationTimestamp="2024-12-13 02:20:04 +0000 UTC" firstStartedPulling="2024-12-13 02:20:04.860904728 +0000 UTC m=+58.983808403" lastFinishedPulling="2024-12-13 02:20:08.484122341 +0000 UTC m=+62.607026016" observedRunningTime="2024-12-13 02:20:08.72590114 +0000 UTC m=+62.848804853" watchObservedRunningTime="2024-12-13 02:20:08.781567903 +0000 UTC m=+62.904471608"
Dec 13 02:20:08.825071 env[1334]: time="2024-12-13T02:20:08.825006869Z" level=info msg="StartContainer for \"2814b371f8979b97a4c969f2907ccfc6436b6b6a262a7a102ed6fdb1d1bcd9dd\" returns successfully"
Dec 13 02:20:08.987049 env[1334]: time="2024-12-13T02:20:08.986980418Z" level=info msg="shim disconnected" id=2814b371f8979b97a4c969f2907ccfc6436b6b6a262a7a102ed6fdb1d1bcd9dd
Dec 13 02:20:08.987049 env[1334]: time="2024-12-13T02:20:08.987046890Z" level=warning msg="cleaning up after shim disconnected" id=2814b371f8979b97a4c969f2907ccfc6436b6b6a262a7a102ed6fdb1d1bcd9dd namespace=k8s.io
Dec 13 02:20:08.987049 env[1334]: time="2024-12-13T02:20:08.987060923Z" level=info msg="cleaning up dead shim"
Dec 13 02:20:08.999349 env[1334]: time="2024-12-13T02:20:08.999280246Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3629 runtime=io.containerd.runc.v2\n"
Dec 13 02:20:09.329228 kubelet[1675]: E1213 02:20:09.329078 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:09.719720 env[1334]: time="2024-12-13T02:20:09.719534373Z" level=info msg="CreateContainer within sandbox \"896b97f0d18578addd583e5d6dcb414b091d69825644fa7caad654b15b1b2f74\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:20:09.740145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3638249649.mount: Deactivated successfully.
Dec 13 02:20:09.751642 env[1334]: time="2024-12-13T02:20:09.751585760Z" level=info msg="CreateContainer within sandbox \"896b97f0d18578addd583e5d6dcb414b091d69825644fa7caad654b15b1b2f74\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"818bb92476c3430397e7d213f7401681db07ce889942c095ee412b394900ddc4\""
Dec 13 02:20:09.752258 env[1334]: time="2024-12-13T02:20:09.752214091Z" level=info msg="StartContainer for \"818bb92476c3430397e7d213f7401681db07ce889942c095ee412b394900ddc4\""
Dec 13 02:20:09.816815 env[1334]: time="2024-12-13T02:20:09.816763635Z" level=info msg="StartContainer for \"818bb92476c3430397e7d213f7401681db07ce889942c095ee412b394900ddc4\" returns successfully"
Dec 13 02:20:09.841573 env[1334]: time="2024-12-13T02:20:09.841513808Z" level=info msg="shim disconnected" id=818bb92476c3430397e7d213f7401681db07ce889942c095ee412b394900ddc4
Dec 13 02:20:09.841893 env[1334]: time="2024-12-13T02:20:09.841855380Z" level=warning msg="cleaning up after shim disconnected" id=818bb92476c3430397e7d213f7401681db07ce889942c095ee412b394900ddc4 namespace=k8s.io
Dec 13 02:20:09.841893 env[1334]: time="2024-12-13T02:20:09.841888949Z" level=info msg="cleaning up dead shim"
Dec 13 02:20:09.853224 env[1334]: time="2024-12-13T02:20:09.853172087Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3687 runtime=io.containerd.runc.v2\n"
Dec 13 02:20:10.039377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-818bb92476c3430397e7d213f7401681db07ce889942c095ee412b394900ddc4-rootfs.mount: Deactivated successfully.
Dec 13 02:20:10.329686 kubelet[1675]: E1213 02:20:10.329529 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:10.725347 env[1334]: time="2024-12-13T02:20:10.725296928Z" level=info msg="CreateContainer within sandbox \"896b97f0d18578addd583e5d6dcb414b091d69825644fa7caad654b15b1b2f74\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:20:10.749229 env[1334]: time="2024-12-13T02:20:10.749158024Z" level=info msg="CreateContainer within sandbox \"896b97f0d18578addd583e5d6dcb414b091d69825644fa7caad654b15b1b2f74\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5b93637eb7307b85273afbb397a64c376439269d3faa20fdae11aea1f1eca82b\""
Dec 13 02:20:10.749982 env[1334]: time="2024-12-13T02:20:10.749938764Z" level=info msg="StartContainer for \"5b93637eb7307b85273afbb397a64c376439269d3faa20fdae11aea1f1eca82b\""
Dec 13 02:20:10.848705 env[1334]: time="2024-12-13T02:20:10.843793141Z" level=info msg="StartContainer for \"5b93637eb7307b85273afbb397a64c376439269d3faa20fdae11aea1f1eca82b\" returns successfully"
Dec 13 02:20:11.041220 systemd[1]: run-containerd-runc-k8s.io-5b93637eb7307b85273afbb397a64c376439269d3faa20fdae11aea1f1eca82b-runc.xlMBUS.mount: Deactivated successfully.
Dec 13 02:20:11.286495 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 02:20:11.330054 kubelet[1675]: E1213 02:20:11.329908 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:11.744918 kubelet[1675]: I1213 02:20:11.744871 1675 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jzhkj" podStartSLOduration=5.744821587 podStartE2EDuration="5.744821587s" podCreationTimestamp="2024-12-13 02:20:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:20:11.744512954 +0000 UTC m=+65.867416639" watchObservedRunningTime="2024-12-13 02:20:11.744821587 +0000 UTC m=+65.867725274"
Dec 13 02:20:12.331017 kubelet[1675]: E1213 02:20:12.330949 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:13.332010 kubelet[1675]: E1213 02:20:13.331965 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:14.337779 kubelet[1675]: E1213 02:20:14.333643 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:14.338585 systemd-networkd[1088]: lxc_health: Link UP
Dec 13 02:20:14.356552 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:20:14.356844 systemd-networkd[1088]: lxc_health: Gained carrier
Dec 13 02:20:15.334788 kubelet[1675]: E1213 02:20:15.334740 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:16.065481 systemd[1]: run-containerd-runc-k8s.io-5b93637eb7307b85273afbb397a64c376439269d3faa20fdae11aea1f1eca82b-runc.Ds6eR6.mount: Deactivated successfully.
Dec 13 02:20:16.081716 systemd-networkd[1088]: lxc_health: Gained IPv6LL
Dec 13 02:20:16.335628 kubelet[1675]: E1213 02:20:16.335472 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:17.335776 kubelet[1675]: E1213 02:20:17.335716 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:18.337674 kubelet[1675]: E1213 02:20:18.337592 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:18.523617 systemd[1]: run-containerd-runc-k8s.io-5b93637eb7307b85273afbb397a64c376439269d3faa20fdae11aea1f1eca82b-runc.rd3zr4.mount: Deactivated successfully.
Dec 13 02:20:19.339401 kubelet[1675]: E1213 02:20:19.339272 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:20.340250 kubelet[1675]: E1213 02:20:20.340183 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:20.728930 systemd[1]: run-containerd-runc-k8s.io-5b93637eb7307b85273afbb397a64c376439269d3faa20fdae11aea1f1eca82b-runc.m0S07a.mount: Deactivated successfully.
Dec 13 02:20:21.340824 kubelet[1675]: E1213 02:20:21.340759 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:22.341240 kubelet[1675]: E1213 02:20:22.341166 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:23.342417 kubelet[1675]: E1213 02:20:23.342348 1675 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"