Feb 9 19:27:30.136825 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:27:30.136864 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:27:30.136880 kernel: BIOS-provided physical RAM map:
Feb 9 19:27:30.136893 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Feb 9 19:27:30.136905 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Feb 9 19:27:30.136917 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Feb 9 19:27:30.136935 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Feb 9 19:27:30.136948 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Feb 9 19:27:30.136960 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Feb 9 19:27:30.136973 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Feb 9 19:27:30.136986 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Feb 9 19:27:30.136998 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Feb 9 19:27:30.137010 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Feb 9 19:27:30.137023 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Feb 9 19:27:30.137043 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Feb 9 19:27:30.137056 kernel: NX (Execute Disable) protection: active
Feb 9 19:27:30.137070 kernel: efi: EFI v2.70 by EDK II
Feb 9 19:27:30.137084 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbe379198 RNG=0xbfb73018 TPMEventLog=0xbe2bd018
Feb 9 19:27:30.137098 kernel: random: crng init done
Feb 9 19:27:30.137111 kernel: SMBIOS 2.4 present.
Feb 9 19:27:30.137125 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023
Feb 9 19:27:30.137138 kernel: Hypervisor detected: KVM
Feb 9 19:27:30.137156 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 19:27:30.137177 kernel: kvm-clock: cpu 0, msr 212faa001, primary cpu clock
Feb 9 19:27:30.137191 kernel: kvm-clock: using sched offset of 13608637137 cycles
Feb 9 19:27:30.137209 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 19:27:30.137224 kernel: tsc: Detected 2299.998 MHz processor
Feb 9 19:27:30.137238 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:27:30.137252 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:27:30.137266 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Feb 9 19:27:30.137280 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:27:30.137294 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Feb 9 19:27:30.137312 kernel: Using GB pages for direct mapping
Feb 9 19:27:30.137325 kernel: Secure boot disabled
Feb 9 19:27:30.137340 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:27:30.137353 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Feb 9 19:27:30.137367 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Feb 9 19:27:30.137381 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Feb 9 19:27:30.137395 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Feb 9 19:27:30.137410 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Feb 9 19:27:30.137433 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217)
Feb 9 19:27:30.137448 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Feb 9 19:27:30.137462 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Feb 9 19:27:30.137475 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Feb 9 19:27:30.137488 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Feb 9 19:27:30.137502 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Feb 9 19:27:30.137521 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Feb 9 19:27:30.137537 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Feb 9 19:27:30.137552 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Feb 9 19:27:30.137567 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Feb 9 19:27:30.137581 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Feb 9 19:27:30.137595 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Feb 9 19:27:30.137609 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Feb 9 19:27:30.137632 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Feb 9 19:27:30.137647 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Feb 9 19:27:30.137666 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 9 19:27:30.137680 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 9 19:27:30.137694 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 9 19:27:30.137708 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Feb 9 19:27:30.137723 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Feb 9 19:27:30.137739 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Feb 9 19:27:30.137753 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Feb 9 19:27:30.137806 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Feb 9 19:27:30.137822 kernel: Zone ranges:
Feb 9 19:27:30.137840 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:27:30.137854 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 19:27:30.137867 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Feb 9 19:27:30.137881 kernel: Movable zone start for each node
Feb 9 19:27:30.137895 kernel: Early memory node ranges
Feb 9 19:27:30.137910 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Feb 9 19:27:30.137924 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Feb 9 19:27:30.137939 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Feb 9 19:27:30.137953 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Feb 9 19:27:30.137972 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Feb 9 19:27:30.137987 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Feb 9 19:27:30.138002 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:27:30.138017 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Feb 9 19:27:30.138031 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Feb 9 19:27:30.138044 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 9 19:27:30.138059 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Feb 9 19:27:30.138073 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 9 19:27:30.138089 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 19:27:30.138107 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:27:30.138121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 19:27:30.138135 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:27:30.138150 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 19:27:30.138164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 19:27:30.138179 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:27:30.138194 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 19:27:30.138208 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Feb 9 19:27:30.138222 kernel: Booting paravirtualized kernel on KVM
Feb 9 19:27:30.138241 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:27:30.138256 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 19:27:30.138271 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 19:27:30.138287 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 19:27:30.138300 kernel: pcpu-alloc: [0] 0 1
Feb 9 19:27:30.138313 kernel: kvm-guest: PV spinlocks enabled
Feb 9 19:27:30.138327 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 19:27:30.138342 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1931256
Feb 9 19:27:30.138357 kernel: Policy zone: Normal
Feb 9 19:27:30.138379 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:27:30.138395 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:27:30.138411 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 9 19:27:30.138425 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:27:30.138440 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:27:30.138456 kernel: Memory: 7536516K/7860584K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 323808K reserved, 0K cma-reserved)
Feb 9 19:27:30.138471 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:27:30.138486 kernel: Kernel/User page tables isolation: enabled
Feb 9 19:27:30.138505 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:27:30.138520 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:27:30.138536 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:27:30.138552 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:27:30.138568 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:27:30.138583 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:27:30.138598 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:27:30.138613 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:27:30.138636 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:27:30.138656 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 9 19:27:30.138683 kernel: Console: colour dummy device 80x25
Feb 9 19:27:30.138699 kernel: printk: console [ttyS0] enabled
Feb 9 19:27:30.138719 kernel: ACPI: Core revision 20210730
Feb 9 19:27:30.138736 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:27:30.138751 kernel: x2apic enabled
Feb 9 19:27:30.138781 kernel: Switched APIC routing to physical x2apic.
Feb 9 19:27:30.138797 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Feb 9 19:27:30.138814 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 9 19:27:30.138830 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Feb 9 19:27:30.138850 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Feb 9 19:27:30.138866 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Feb 9 19:27:30.138882 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:27:30.138899 kernel: Spectre V2 : Mitigation: IBRS
Feb 9 19:27:30.138915 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:27:30.138931 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:27:30.138950 kernel: RETBleed: Mitigation: IBRS
Feb 9 19:27:30.138967 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 19:27:30.138982 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Feb 9 19:27:30.138999 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 19:27:30.139014 kernel: MDS: Mitigation: Clear CPU buffers
Feb 9 19:27:30.139031 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:27:30.139047 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:27:30.139063 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:27:30.139079 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:27:30.139098 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:27:30.139114 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 9 19:27:30.139129 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:27:30.139145 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:27:30.139161 kernel: LSM: Security Framework initializing
Feb 9 19:27:30.139177 kernel: SELinux: Initializing.
Feb 9 19:27:30.139193 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:27:30.139209 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:27:30.139225 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Feb 9 19:27:30.139245 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 9 19:27:30.139261 kernel: signal: max sigframe size: 1776
Feb 9 19:27:30.139277 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:27:30.139293 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 9 19:27:30.139309 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:27:30.139325 kernel: x86: Booting SMP configuration:
Feb 9 19:27:30.139341 kernel: .... node #0, CPUs: #1
Feb 9 19:27:30.139357 kernel: kvm-clock: cpu 1, msr 212faa041, secondary cpu clock
Feb 9 19:27:30.139374 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 9 19:27:30.139394 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 19:27:30.139410 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:27:30.139426 kernel: smpboot: Max logical packages: 1
Feb 9 19:27:30.139443 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Feb 9 19:27:30.139458 kernel: devtmpfs: initialized
Feb 9 19:27:30.139474 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:27:30.139490 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Feb 9 19:27:30.139506 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:27:30.139522 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:27:30.139541 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:27:30.139556 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:27:30.139570 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:27:30.139583 kernel: audit: type=2000 audit(1707506849.138:1): state=initialized audit_enabled=0 res=1
Feb 9 19:27:30.139597 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:27:30.139612 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:27:30.139801 kernel: cpuidle: using governor menu
Feb 9 19:27:30.139819 kernel: ACPI: bus type PCI registered
Feb 9 19:27:30.139835 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:27:30.139993 kernel: dca service started, version 1.12.1
Feb 9 19:27:30.140012 kernel: PCI: Using configuration type 1 for base access
Feb 9 19:27:30.140030 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:27:30.140046 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:27:30.140063 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:27:30.140080 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:27:30.140239 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:27:30.140262 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:27:30.140280 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:27:30.140306 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:27:30.140323 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:27:30.140478 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:27:30.140498 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 9 19:27:30.140526 kernel: ACPI: Interpreter enabled
Feb 9 19:27:30.140544 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 9 19:27:30.140559 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:27:30.140575 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:27:30.140591 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 9 19:27:30.140613 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 19:27:30.140861 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 19:27:30.141036 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 9 19:27:30.141059 kernel: PCI host bridge to bus 0000:00
Feb 9 19:27:30.141222 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 19:27:30.141376 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 19:27:30.141527 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 19:27:30.141679 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Feb 9 19:27:30.141851 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 19:27:30.142048 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 19:27:30.142247 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Feb 9 19:27:30.142428 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 9 19:27:30.142593 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 9 19:27:30.145893 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Feb 9 19:27:30.146349 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Feb 9 19:27:30.146652 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Feb 9 19:27:30.149247 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 9 19:27:30.149446 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Feb 9 19:27:30.149625 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Feb 9 19:27:30.149825 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 19:27:30.149998 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 9 19:27:30.150160 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Feb 9 19:27:30.150183 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 19:27:30.150200 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 19:27:30.150217 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 19:27:30.150234 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 19:27:30.150252 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 19:27:30.150274 kernel: iommu: Default domain type: Translated
Feb 9 19:27:30.150298 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:27:30.150315 kernel: vgaarb: loaded
Feb 9 19:27:30.150333 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:27:30.150351 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:27:30.150368 kernel: PTP clock support registered
Feb 9 19:27:30.150385 kernel: Registered efivars operations
Feb 9 19:27:30.150403 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:27:30.150420 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 19:27:30.150442 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Feb 9 19:27:30.150459 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Feb 9 19:27:30.150475 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Feb 9 19:27:30.150492 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Feb 9 19:27:30.150509 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 19:27:30.150526 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:27:30.150544 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:27:30.150561 kernel: pnp: PnP ACPI init
Feb 9 19:27:30.150578 kernel: pnp: PnP ACPI: found 7 devices
Feb 9 19:27:30.150600 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:27:30.150625 kernel: NET: Registered PF_INET protocol family
Feb 9 19:27:30.150642 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:27:30.150660 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 9 19:27:30.150677 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:27:30.150694 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:27:30.150711 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 9 19:27:30.150728 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 9 19:27:30.150746 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:27:30.150781 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:27:30.151039 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:27:30.151059 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:27:30.151413 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 19:27:30.151910 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 19:27:30.152158 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 19:27:30.152300 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Feb 9 19:27:30.152466 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 19:27:30.152496 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:27:30.152514 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 9 19:27:30.152532 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB)
Feb 9 19:27:30.152549 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 9 19:27:30.152567 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 9 19:27:30.152585 kernel: clocksource: Switched to clocksource tsc
Feb 9 19:27:30.152602 kernel: Initialise system trusted keyrings
Feb 9 19:27:30.152628 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 9 19:27:30.152649 kernel: Key type asymmetric registered
Feb 9 19:27:30.152666 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:27:30.152683 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:27:30.152701 kernel: io scheduler mq-deadline registered
Feb 9 19:27:30.152717 kernel: io scheduler kyber registered
Feb 9 19:27:30.152734 kernel: io scheduler bfq registered
Feb 9 19:27:30.152752 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:27:30.152784 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 9 19:27:30.152958 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Feb 9 19:27:30.152986 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 9 19:27:30.153145 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Feb 9 19:27:30.153168 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 9 19:27:30.153329 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Feb 9 19:27:30.153353 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:27:30.153371 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:27:30.153388 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 9 19:27:30.153405 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Feb 9 19:27:30.153421 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Feb 9 19:27:30.153590 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Feb 9 19:27:30.153624 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 19:27:30.153641 kernel: i8042: Warning: Keylock active
Feb 9 19:27:30.153658 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 19:27:30.153675 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 19:27:30.160606 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 9 19:27:30.160820 kernel: rtc_cmos 00:00: registered as rtc0
Feb 9 19:27:30.160981 kernel: rtc_cmos 00:00: setting system clock to 2024-02-09T19:27:29 UTC (1707506849)
Feb 9 19:27:30.161131 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 9 19:27:30.161155 kernel: intel_pstate: CPU model not supported
Feb 9 19:27:30.161174 kernel: pstore: Registered efi as persistent store backend
Feb 9 19:27:30.161193 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:27:30.161211 kernel: Segment Routing with IPv6
Feb 9 19:27:30.161229 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:27:30.161246 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:27:30.161264 kernel: Key type dns_resolver registered
Feb 9 19:27:30.161303 kernel: IPI shorthand broadcast: enabled
Feb 9 19:27:30.161321 kernel: sched_clock: Marking stable (773273923, 205799445)->(1058011810, -78938442)
Feb 9 19:27:30.161340 kernel: registered taskstats version 1
Feb 9 19:27:30.161358 kernel: Loading compiled-in X.509 certificates
Feb 9 19:27:30.161375 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 19:27:30.161394 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:27:30.161412 kernel: Key type .fscrypt registered
Feb 9 19:27:30.161430 kernel: Key type fscrypt-provisioning registered
Feb 9 19:27:30.161448 kernel: pstore: Using crash dump compression: deflate
Feb 9 19:27:30.161470 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:27:30.161487 kernel: ima: No architecture policies found
Feb 9 19:27:30.161504 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:27:30.161521 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:27:30.161539 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:27:30.161557 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:27:30.161575 kernel: Run /init as init process
Feb 9 19:27:30.161592 kernel: with arguments:
Feb 9 19:27:30.161619 kernel: /init
Feb 9 19:27:30.161637 kernel: with environment:
Feb 9 19:27:30.161654 kernel: HOME=/
Feb 9 19:27:30.161671 kernel: TERM=linux
Feb 9 19:27:30.161689 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:27:30.161712 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:27:30.161734 systemd[1]: Detected virtualization kvm.
Feb 9 19:27:30.161757 systemd[1]: Detected architecture x86-64.
Feb 9 19:27:30.161803 systemd[1]: Running in initrd.
Feb 9 19:27:30.162034 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:27:30.162053 systemd[1]: Hostname set to .
Feb 9 19:27:30.162073 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:27:30.162091 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:27:30.162107 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:27:30.162257 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:27:30.162277 systemd[1]: Reached target paths.target.
Feb 9 19:27:30.162303 systemd[1]: Reached target slices.target.
Feb 9 19:27:30.162322 systemd[1]: Reached target swap.target.
Feb 9 19:27:30.162340 systemd[1]: Reached target timers.target.
Feb 9 19:27:30.162358 systemd[1]: Listening on iscsid.socket.
Feb 9 19:27:30.162519 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:27:30.162538 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:27:30.162558 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:27:30.162582 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:27:30.162755 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:27:30.162919 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:27:30.162940 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:27:30.162959 systemd[1]: Reached target sockets.target.
Feb 9 19:27:30.162978 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:27:30.162997 systemd[1]: Finished network-cleanup.service.
Feb 9 19:27:30.163015 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:27:30.163034 systemd[1]: Starting systemd-journald.service...
Feb 9 19:27:30.163058 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:27:30.163076 kernel: audit: type=1334 audit(1707506850.134:2): prog-id=6 op=LOAD
Feb 9 19:27:30.163096 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:27:30.163133 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:27:30.163156 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:27:30.163176 kernel: audit: type=1130 audit(1707506850.152:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.163195 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:27:30.163225 systemd-journald[190]: Journal started
Feb 9 19:27:30.163323 systemd-journald[190]: Runtime Journal (/run/log/journal/1a608b5105a9c43b994666ff33f9bb4e) is 8.0M, max 148.8M, 140.8M free.
Feb 9 19:27:30.134000 audit: BPF prog-id=6 op=LOAD
Feb 9 19:27:30.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.178182 systemd[1]: Started systemd-journald.service.
Feb 9 19:27:30.178261 kernel: audit: type=1130 audit(1707506850.169:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.179341 systemd-modules-load[191]: Inserted module 'overlay'
Feb 9 19:27:30.195234 kernel: audit: type=1130 audit(1707506850.181:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.183540 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:27:30.193346 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:27:30.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.201012 kernel: audit: type=1130 audit(1707506850.190:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.201910 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:27:30.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.217936 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:27:30.223884 kernel: audit: type=1130 audit(1707506850.216:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.234280 systemd-resolved[192]: Positive Trust Anchors:
Feb 9 19:27:30.235148 systemd-resolved[192]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:27:30.235380 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:27:30.240288 systemd-resolved[192]: Defaulting to hostname 'linux'.
Feb 9 19:27:30.242884 systemd[1]: Started systemd-resolved.service.
Feb 9 19:27:30.243145 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:27:30.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.249823 kernel: audit: type=1130 audit(1707506850.241:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.252232 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:27:30.268936 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:27:30.268977 kernel: Bridge firewalling registered
Feb 9 19:27:30.269003 kernel: audit: type=1130 audit(1707506850.258:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.259954 systemd-modules-load[191]: Inserted module 'br_netfilter'
Feb 9 19:27:30.261554 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:27:30.281426 dracut-cmdline[206]: dracut-dracut-053
Feb 9 19:27:30.286355 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:27:30.297920 kernel: SCSI subsystem initialized
Feb 9 19:27:30.315087 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:27:30.315179 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:27:30.317336 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:27:30.322230 systemd-modules-load[191]: Inserted module 'dm_multipath'
Feb 9 19:27:30.332790 kernel: audit: type=1130 audit(1707506850.325:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:30.323353 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:27:30.331665 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:27:30.347931 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:27:30.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:30.389796 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:27:30.407816 kernel: iscsi: registered transport (tcp) Feb 9 19:27:30.438824 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:27:30.438918 kernel: QLogic iSCSI HBA Driver Feb 9 19:27:30.484097 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:27:30.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:30.494287 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:27:30.556844 kernel: raid6: avx2x4 gen() 18477 MB/s Feb 9 19:27:30.577815 kernel: raid6: avx2x4 xor() 7694 MB/s Feb 9 19:27:30.598818 kernel: raid6: avx2x2 gen() 18344 MB/s Feb 9 19:27:30.619812 kernel: raid6: avx2x2 xor() 18639 MB/s Feb 9 19:27:30.640809 kernel: raid6: avx2x1 gen() 13888 MB/s Feb 9 19:27:30.661811 kernel: raid6: avx2x1 xor() 16049 MB/s Feb 9 19:27:30.682812 kernel: raid6: sse2x4 gen() 10971 MB/s Feb 9 19:27:30.703805 kernel: raid6: sse2x4 xor() 6711 MB/s Feb 9 19:27:30.724808 kernel: raid6: sse2x2 gen() 12048 MB/s Feb 9 19:27:30.745819 kernel: raid6: sse2x2 xor() 7393 MB/s Feb 9 19:27:30.766812 kernel: raid6: sse2x1 gen() 10488 MB/s Feb 9 19:27:30.792858 kernel: raid6: sse2x1 xor() 5173 MB/s Feb 9 19:27:30.792957 kernel: raid6: using algorithm avx2x4 gen() 18477 MB/s Feb 9 19:27:30.792980 kernel: raid6: .... 
xor() 7694 MB/s, rmw enabled Feb 9 19:27:30.797925 kernel: raid6: using avx2x2 recovery algorithm Feb 9 19:27:30.823815 kernel: xor: automatically using best checksumming function avx Feb 9 19:27:30.936795 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:27:30.949955 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:27:30.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:30.956000 audit: BPF prog-id=7 op=LOAD Feb 9 19:27:30.956000 audit: BPF prog-id=8 op=LOAD Feb 9 19:27:30.959553 systemd[1]: Starting systemd-udevd.service... Feb 9 19:27:30.976883 systemd-udevd[388]: Using default interface naming scheme 'v252'. Feb 9 19:27:30.984038 systemd[1]: Started systemd-udevd.service. Feb 9 19:27:30.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:30.998204 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:27:31.013702 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Feb 9 19:27:31.054041 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:27:31.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:31.055366 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:27:31.122686 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:27:31.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:31.213792 kernel: scsi host0: Virtio SCSI HBA Feb 9 19:27:31.262568 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:27:31.262665 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Feb 9 19:27:31.336697 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 19:27:31.336793 kernel: AES CTR mode by8 optimization enabled Feb 9 19:27:31.360235 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Feb 9 19:27:31.360618 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Feb 9 19:27:31.375944 kernel: sd 0:0:1:0: [sda] Write Protect is off Feb 9 19:27:31.376307 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Feb 9 19:27:31.376557 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 19:27:31.395811 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 19:27:31.395887 kernel: GPT:17805311 != 25165823 Feb 9 19:27:31.395910 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:27:31.401072 kernel: GPT:17805311 != 25165823 Feb 9 19:27:31.405118 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:27:31.415497 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:27:31.421792 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Feb 9 19:27:31.482031 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:27:31.493951 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (436) Feb 9 19:27:31.508516 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:27:31.517173 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:27:31.525202 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:27:31.551392 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:27:31.564254 systemd[1]: Starting disk-uuid.service... 
Feb 9 19:27:31.586179 disk-uuid[515]: Primary Header is updated. Feb 9 19:27:31.586179 disk-uuid[515]: Secondary Entries is updated. Feb 9 19:27:31.586179 disk-uuid[515]: Secondary Header is updated. Feb 9 19:27:31.625946 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:27:31.625999 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:27:31.649820 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:27:32.638925 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:27:32.639016 disk-uuid[516]: The operation has completed successfully. Feb 9 19:27:32.708315 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:27:32.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:32.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:32.708468 systemd[1]: Finished disk-uuid.service. Feb 9 19:27:32.727192 systemd[1]: Starting verity-setup.service... Feb 9 19:27:32.755790 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 19:27:32.832211 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:27:32.835341 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:27:32.858416 systemd[1]: Finished verity-setup.service. Feb 9 19:27:32.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:32.938801 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:27:32.939556 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:27:32.948201 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Feb 9 19:27:32.987019 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:27:32.987054 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:27:32.987069 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:27:32.949293 systemd[1]: Starting ignition-setup.service... Feb 9 19:27:33.007935 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 19:27:32.995133 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:27:33.037490 systemd[1]: Finished ignition-setup.service. Feb 9 19:27:33.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:33.039357 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:27:33.111117 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:27:33.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:33.124000 audit: BPF prog-id=9 op=LOAD Feb 9 19:27:33.127485 systemd[1]: Starting systemd-networkd.service... Feb 9 19:27:33.165404 systemd-networkd[690]: lo: Link UP Feb 9 19:27:33.165423 systemd-networkd[690]: lo: Gained carrier Feb 9 19:27:33.166348 systemd-networkd[690]: Enumeration completed Feb 9 19:27:33.166731 systemd-networkd[690]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:27:33.167001 systemd[1]: Started systemd-networkd.service. Feb 9 19:27:33.169333 systemd-networkd[690]: eth0: Link UP Feb 9 19:27:33.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:33.169341 systemd-networkd[690]: eth0: Gained carrier Feb 9 19:27:33.178948 systemd-networkd[690]: eth0: DHCPv4 address 10.128.0.112/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 9 19:27:33.211198 systemd[1]: Reached target network.target. Feb 9 19:27:33.227528 systemd[1]: Starting iscsiuio.service... Feb 9 19:27:33.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:33.241254 systemd[1]: Started iscsiuio.service. Feb 9 19:27:33.284109 iscsid[700]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:27:33.284109 iscsid[700]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 19:27:33.284109 iscsid[700]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 19:27:33.284109 iscsid[700]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:27:33.284109 iscsid[700]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:27:33.284109 iscsid[700]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:27:33.284109 iscsid[700]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:27:33.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 9 19:27:33.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:33.270807 systemd[1]: Starting iscsid.service... Feb 9 19:27:33.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:33.360574 ignition[616]: Ignition 2.14.0 Feb 9 19:27:33.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:33.292115 systemd[1]: Started iscsid.service. Feb 9 19:27:33.360589 ignition[616]: Stage: fetch-offline Feb 9 19:27:33.312378 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:27:33.360671 ignition[616]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:27:33.367302 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:27:33.360712 ignition[616]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:27:33.376122 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:27:33.382544 ignition[616]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:27:33.386050 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:27:33.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:33.382752 ignition[616]: parsed url from cmdline: "" Feb 9 19:27:33.394085 systemd[1]: Reached target remote-fs.target. 
Feb 9 19:27:33.382757 ignition[616]: no config URL provided Feb 9 19:27:33.412120 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:27:33.382781 ignition[616]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:27:33.429428 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:27:33.382796 ignition[616]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:27:33.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:33.455450 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:27:33.382808 ignition[616]: failed to fetch config: resource requires networking Feb 9 19:27:33.472330 systemd[1]: Starting ignition-fetch.service... Feb 9 19:27:33.383232 ignition[616]: Ignition finished successfully Feb 9 19:27:33.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:33.521274 unknown[715]: fetched base config from "system" Feb 9 19:27:33.484368 ignition[715]: Ignition 2.14.0 Feb 9 19:27:33.521287 unknown[715]: fetched base config from "system" Feb 9 19:27:33.484376 ignition[715]: Stage: fetch Feb 9 19:27:33.521296 unknown[715]: fetched user config from "gcp" Feb 9 19:27:33.484509 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:27:33.534418 systemd[1]: Finished ignition-fetch.service. Feb 9 19:27:33.484543 ignition[715]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:27:33.553388 systemd[1]: Starting ignition-kargs.service... 
Feb 9 19:27:33.493041 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:27:33.589383 systemd[1]: Finished ignition-kargs.service. Feb 9 19:27:33.493236 ignition[715]: parsed url from cmdline: "" Feb 9 19:27:33.614319 systemd[1]: Starting ignition-disks.service... Feb 9 19:27:33.493248 ignition[715]: no config URL provided Feb 9 19:27:33.645309 systemd[1]: Finished ignition-disks.service. Feb 9 19:27:33.493256 ignition[715]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:27:33.660323 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:27:33.493267 ignition[715]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:27:33.675954 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:27:33.493309 ignition[715]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Feb 9 19:27:33.688953 systemd[1]: Reached target local-fs.target. Feb 9 19:27:33.499912 ignition[715]: GET result: OK Feb 9 19:27:33.700934 systemd[1]: Reached target sysinit.target. Feb 9 19:27:33.499994 ignition[715]: parsing config with SHA512: 3c7d7f0e290de7e85863e361bd1e8d4805aa15d72c6f32e315dd859193b8e04a712af9da754921b6da84ff41977d00ed5d63b3b560ea699c700f7fa67767f0c7 Feb 9 19:27:33.713978 systemd[1]: Reached target basic.target. Feb 9 19:27:33.522169 ignition[715]: fetch: fetch complete Feb 9 19:27:33.715376 systemd[1]: Starting systemd-fsck-root.service... 
Feb 9 19:27:33.522176 ignition[715]: fetch: fetch passed Feb 9 19:27:33.522229 ignition[715]: Ignition finished successfully Feb 9 19:27:33.567548 ignition[721]: Ignition 2.14.0 Feb 9 19:27:33.567559 ignition[721]: Stage: kargs Feb 9 19:27:33.567703 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:27:33.567734 ignition[721]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:27:33.575960 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:27:33.578993 ignition[721]: kargs: kargs passed Feb 9 19:27:33.579055 ignition[721]: Ignition finished successfully Feb 9 19:27:33.626483 ignition[727]: Ignition 2.14.0 Feb 9 19:27:33.626493 ignition[727]: Stage: disks Feb 9 19:27:33.626629 ignition[727]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:27:33.626660 ignition[727]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:27:33.634696 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:27:33.636113 ignition[727]: disks: disks passed Feb 9 19:27:33.636168 ignition[727]: Ignition finished successfully Feb 9 19:27:33.756408 systemd-fsck[735]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks Feb 9 19:27:33.941836 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:27:33.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:33.943229 systemd[1]: Mounting sysroot.mount... Feb 9 19:27:33.980098 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:27:33.974178 systemd[1]: Mounted sysroot.mount. 
Feb 9 19:27:33.987272 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:27:34.009670 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:27:34.028728 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:27:34.028877 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:27:34.028931 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:27:34.050665 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:27:34.084190 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:27:34.112196 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (741) Feb 9 19:27:34.110461 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:27:34.148717 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:27:34.148750 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:27:34.148789 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:27:34.148812 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 19:27:34.148973 initrd-setup-root[746]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:27:34.158912 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:27:34.157481 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:27:34.185058 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:27:34.196046 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:27:34.244235 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:27:34.284040 kernel: kauditd_printk_skb: 24 callbacks suppressed Feb 9 19:27:34.284096 kernel: audit: type=1130 audit(1707506854.242:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:34.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:34.245983 systemd[1]: Starting ignition-mount.service... Feb 9 19:27:34.273315 systemd-networkd[690]: eth0: Gained IPv6LL Feb 9 19:27:34.292710 systemd[1]: Starting sysroot-boot.service... Feb 9 19:27:34.306281 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:27:34.329923 ignition[806]: INFO : Ignition 2.14.0 Feb 9 19:27:34.329923 ignition[806]: INFO : Stage: mount Feb 9 19:27:34.329923 ignition[806]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:27:34.329923 ignition[806]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:27:34.430989 kernel: audit: type=1130 audit(1707506854.352:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:34.431037 kernel: audit: type=1130 audit(1707506854.384:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:34.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:34.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:34.306552 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Feb 9 19:27:34.448095 ignition[806]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:27:34.448095 ignition[806]: INFO : mount: mount passed Feb 9 19:27:34.448095 ignition[806]: INFO : Ignition finished successfully Feb 9 19:27:34.512957 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (816) Feb 9 19:27:34.513027 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:27:34.513052 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:27:34.513072 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:27:34.513093 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 19:27:34.339028 systemd[1]: Finished ignition-mount.service. Feb 9 19:27:34.356582 systemd[1]: Finished sysroot-boot.service. Feb 9 19:27:34.387702 systemd[1]: Starting ignition-files.service... Feb 9 19:27:34.541977 ignition[835]: INFO : Ignition 2.14.0 Feb 9 19:27:34.541977 ignition[835]: INFO : Stage: files Feb 9 19:27:34.541977 ignition[835]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:27:34.541977 ignition[835]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:27:34.597947 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (837) Feb 9 19:27:34.442202 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 9 19:27:34.606957 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:27:34.606957 ignition[835]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:27:34.606957 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:27:34.606957 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:27:34.606957 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:27:34.606957 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:27:34.606957 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:27:34.606957 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts" Feb 9 19:27:34.606957 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:27:34.606957 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem799356564" Feb 9 19:27:34.606957 ignition[835]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem799356564": device or resource busy Feb 9 19:27:34.606957 ignition[835]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem799356564", trying btrfs: device or resource busy Feb 9 19:27:34.606957 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem799356564" Feb 9 19:27:34.606957 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem799356564" Feb 9 
19:27:34.606957 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem799356564" Feb 9 19:27:34.606957 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem799356564" Feb 9 19:27:34.606957 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts" Feb 9 19:27:34.606957 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:27:34.507842 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:27:34.877953 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 19:27:34.566917 unknown[835]: wrote ssh authorized keys file for user: core Feb 9 19:27:34.907919 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 19:27:35.131515 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 19:27:35.155999 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:27:35.155999 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:27:35.155999 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:27:35.335826 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 
19:27:35.450607 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 19:27:35.473926 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:27:35.473926 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Feb 9 19:27:35.473926 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:27:35.473926 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2459614177" Feb 9 19:27:35.473926 ignition[835]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2459614177": device or resource busy Feb 9 19:27:35.473926 ignition[835]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2459614177", trying btrfs: device or resource busy Feb 9 19:27:35.473926 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2459614177" Feb 9 19:27:35.473926 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2459614177" Feb 9 19:27:35.473926 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem2459614177" Feb 9 19:27:35.473926 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem2459614177" Feb 9 19:27:35.473926 ignition[835]: INFO : 
files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Feb 9 19:27:35.473926 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:27:35.473926 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(d): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 19:27:35.464500 systemd[1]: mnt-oem2459614177.mount: Deactivated successfully. Feb 9 19:27:35.701072 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(d): GET result: OK Feb 9 19:27:35.862869 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(d): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 19:27:35.886949 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:27:35.886949 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:27:35.886949 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:27:35.886949 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Feb 9 19:27:36.478238 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(e): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 19:27:36.502953 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:27:36.502953 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file 
"/sysroot/home/core/install.sh" Feb 9 19:27:36.502953 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:27:36.502953 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:27:36.502953 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:27:36.502953 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:27:36.502953 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:27:36.502953 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Feb 9 19:27:36.502953 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(12): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:27:36.502953 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem609481692" Feb 9 19:27:36.502953 ignition[835]: CRITICAL : files: createFilesystemsFiles: createFiles: op(12): op(13): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem609481692": device or resource busy Feb 9 19:27:36.502953 ignition[835]: ERROR : files: createFilesystemsFiles: createFiles: op(12): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem609481692", trying btrfs: device or resource busy Feb 9 19:27:36.502953 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem609481692" Feb 9 19:27:36.898998 kernel: audit: type=1130 audit(1707506856.530:38): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.899054 kernel: audit: type=1130 audit(1707506856.625:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.899081 kernel: audit: type=1130 audit(1707506856.683:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.899117 kernel: audit: type=1131 audit(1707506856.683:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.899140 kernel: audit: type=1130 audit(1707506856.806:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.899155 kernel: audit: type=1131 audit(1707506856.806:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:27:36.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.899384 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(14): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem609481692" Feb 9 19:27:36.899384 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(15): [started] unmounting "/mnt/oem609481692" Feb 9 19:27:36.899384 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(15): [finished] unmounting "/mnt/oem609481692" Feb 9 19:27:36.899384 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Feb 9 19:27:36.899384 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(16): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Feb 9 19:27:36.899384 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(16): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:27:36.899384 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(16): op(17): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2171245448" Feb 9 19:27:36.899384 ignition[835]: CRITICAL : files: createFilesystemsFiles: createFiles: op(16): op(17): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2171245448": device or 
resource busy Feb 9 19:27:36.899384 ignition[835]: ERROR : files: createFilesystemsFiles: createFiles: op(16): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2171245448", trying btrfs: device or resource busy Feb 9 19:27:36.899384 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(16): op(18): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2171245448" Feb 9 19:27:36.899384 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(16): op(18): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2171245448" Feb 9 19:27:36.899384 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(16): op(19): [started] unmounting "/mnt/oem2171245448" Feb 9 19:27:36.899384 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(16): op(19): [finished] unmounting "/mnt/oem2171245448" Feb 9 19:27:36.899384 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(16): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Feb 9 19:27:36.899384 ignition[835]: INFO : files: op(1a): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:27:37.212133 kernel: audit: type=1130 audit(1707506856.964:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.508381 systemd[1]: mnt-oem609481692.mount: Deactivated successfully. 
Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(1a): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(1b): [started] processing unit "oem-gce.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(1b): [finished] processing unit "oem-gce.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(1c): [started] processing unit "oem-gce-enable-oslogin.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(1c): [finished] processing unit "oem-gce-enable-oslogin.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(1d): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(1d): op(1e): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(1d): op(1e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(1d): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(1f): [started] processing unit "prepare-critools.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(1f): op(20): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(1f): [finished] processing unit "prepare-critools.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(21): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(21): [finished] setting preset to enabled for 
"coreos-metadata-sshkeys@.service " Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(22): [started] setting preset to enabled for "oem-gce.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(22): [finished] setting preset to enabled for "oem-gce.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(23): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(23): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Feb 9 19:27:37.237996 ignition[835]: INFO : files: op(24): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:27:37.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.530906 systemd[1]: Finished ignition-files.service. Feb 9 19:27:37.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:37.606052 ignition[835]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:27:37.606052 ignition[835]: INFO : files: op(25): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:27:37.606052 ignition[835]: INFO : files: op(25): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:27:37.606052 ignition[835]: INFO : files: createResultFile: createFiles: op(26): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:27:37.606052 ignition[835]: INFO : files: createResultFile: createFiles: op(26): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:27:37.606052 ignition[835]: INFO : files: files passed Feb 9 19:27:37.606052 ignition[835]: INFO : Ignition finished successfully Feb 9 19:27:37.700083 initrd-setup-root-after-ignition[858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:27:37.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.542499 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:27:37.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.747096 iscsid[700]: iscsid shutting down. Feb 9 19:27:37.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.573180 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Feb 9 19:27:37.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.574361 systemd[1]: Starting ignition-quench.service... Feb 9 19:27:37.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.602338 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:27:37.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.627478 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:27:37.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.627617 systemd[1]: Finished ignition-quench.service. Feb 9 19:27:37.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.685320 systemd[1]: Reached target ignition-complete.target. Feb 9 19:27:37.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:37.864186 ignition[873]: INFO : Ignition 2.14.0 Feb 9 19:27:37.864186 ignition[873]: INFO : Stage: umount Feb 9 19:27:37.864186 ignition[873]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:27:37.864186 ignition[873]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:27:37.864186 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:27:37.864186 ignition[873]: INFO : umount: umount passed Feb 9 19:27:37.864186 ignition[873]: INFO : Ignition finished successfully Feb 9 19:27:37.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.767662 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:27:37.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.798028 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:27:38.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:38.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:36.798155 systemd[1]: Finished initrd-parse-etc.service. 
Feb 9 19:27:36.808442 systemd[1]: Reached target initrd-fs.target. Feb 9 19:27:36.876191 systemd[1]: Reached target initrd.target. Feb 9 19:27:36.906185 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:27:36.907560 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:27:36.930431 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:27:36.967631 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:27:38.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.035262 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:27:38.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:38.113000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:27:37.066300 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:27:37.112340 systemd[1]: Stopped target timers.target. Feb 9 19:27:37.151257 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:27:38.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.151458 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:27:38.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.167535 systemd[1]: Stopped target initrd.target. Feb 9 19:27:38.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:37.188298 systemd[1]: Stopped target basic.target. Feb 9 19:27:37.230279 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:27:38.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.258300 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:27:37.269335 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:27:37.306286 systemd[1]: Stopped target remote-fs.target. Feb 9 19:27:38.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.337303 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:27:38.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.356355 systemd[1]: Stopped target sysinit.target. Feb 9 19:27:38.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.398410 systemd[1]: Stopped target local-fs.target. Feb 9 19:27:37.409309 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:27:37.455264 systemd[1]: Stopped target swap.target. Feb 9 19:27:38.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.466371 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Feb 9 19:27:38.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.466572 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:27:38.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.486527 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:27:38.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.524192 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:27:38.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:38.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:37.524432 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:27:37.537500 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:27:37.537678 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:27:37.578368 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:27:37.578549 systemd[1]: Stopped ignition-files.service. Feb 9 19:27:38.474000 systemd-journald[190]: Received SIGTERM from PID 1 (n/a). Feb 9 19:27:37.601102 systemd[1]: Stopping ignition-mount.service... Feb 9 19:27:37.632537 systemd[1]: Stopping iscsid.service... 
Feb 9 19:27:37.671693 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:27:37.687145 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:27:37.687475 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:27:37.715401 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:27:37.715589 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:27:37.741568 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:27:37.742494 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:27:37.742619 systemd[1]: Stopped iscsid.service. Feb 9 19:27:37.754716 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:27:37.754882 systemd[1]: Stopped ignition-mount.service. Feb 9 19:27:37.769916 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:27:37.770033 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:27:37.793893 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:27:37.794038 systemd[1]: Stopped ignition-disks.service. Feb 9 19:27:37.808180 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:27:37.808258 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:27:37.824091 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:27:37.824176 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:27:37.841102 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:27:37.841187 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:27:37.856059 systemd[1]: Stopped target paths.target. Feb 9 19:27:37.870973 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:27:37.872896 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:27:37.885992 systemd[1]: Stopped target slices.target. Feb 9 19:27:37.898969 systemd[1]: Stopped target sockets.target. Feb 9 19:27:37.899134 systemd[1]: iscsid.socket: Deactivated successfully. 
Feb 9 19:27:37.899194 systemd[1]: Closed iscsid.socket. Feb 9 19:27:37.916231 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:27:37.916303 systemd[1]: Stopped ignition-setup.service. Feb 9 19:27:37.940275 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:27:37.940355 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:27:37.957371 systemd[1]: Stopping iscsiuio.service... Feb 9 19:27:37.978527 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:27:37.978646 systemd[1]: Stopped iscsiuio.service. Feb 9 19:27:37.993448 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:27:37.993556 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:27:38.010111 systemd[1]: Stopped target network.target. Feb 9 19:27:38.024983 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:27:38.025065 systemd[1]: Closed iscsiuio.socket. Feb 9 19:27:38.039369 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:27:38.043882 systemd-networkd[690]: eth0: DHCPv6 lease lost Feb 9 19:27:38.060283 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:27:38.085913 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:27:38.086056 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:27:38.100701 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:27:38.100877 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:27:38.115609 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:27:38.115652 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:27:38.131097 systemd[1]: Stopping network-cleanup.service... Feb 9 19:27:38.144945 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:27:38.145070 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:27:38.160191 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:27:38.160266 systemd[1]: Stopped systemd-sysctl.service. 
Feb 9 19:27:38.177258 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:27:38.177324 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:27:38.192281 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:27:38.207747 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:27:38.208526 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:27:38.208680 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:27:38.223569 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:27:38.223664 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:27:38.238233 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:27:38.238305 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:27:38.254151 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:27:38.254241 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:27:38.270184 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:27:38.270266 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:27:38.285171 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:27:38.285244 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:27:38.302210 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:27:38.323044 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 19:27:38.323166 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 19:27:38.338216 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:27:38.338315 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:27:38.353166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:27:38.353242 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:27:38.370581 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Feb 9 19:27:38.371249 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:27:38.371364 systemd[1]: Stopped network-cleanup.service. Feb 9 19:27:38.384429 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:27:38.384551 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:27:38.399355 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:27:38.416087 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:27:38.439235 systemd[1]: Switching root. Feb 9 19:27:38.484498 systemd-journald[190]: Journal stopped Feb 9 19:27:43.167870 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:27:43.168010 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 19:27:43.168043 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:27:43.168071 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:27:43.168099 kernel: SELinux: policy capability open_perms=1 Feb 9 19:27:43.168122 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:27:43.168149 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:27:43.168171 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:27:43.168204 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:27:43.168226 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:27:43.168247 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:27:43.168273 systemd[1]: Successfully loaded SELinux policy in 110.871ms. Feb 9 19:27:43.168322 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.135ms. 
Feb 9 19:27:43.168352 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:27:43.168378 systemd[1]: Detected virtualization kvm. Feb 9 19:27:43.168401 systemd[1]: Detected architecture x86-64. Feb 9 19:27:43.168497 systemd[1]: Detected first boot. Feb 9 19:27:43.168527 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:27:43.168552 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:27:43.168578 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:27:43.168604 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:27:43.168630 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:27:43.168657 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 9 19:27:43.168686 kernel: kauditd_printk_skb: 52 callbacks suppressed Feb 9 19:27:43.168711 kernel: audit: type=1334 audit(1707506862.274:90): prog-id=12 op=LOAD Feb 9 19:27:43.168734 kernel: audit: type=1334 audit(1707506862.274:91): prog-id=3 op=UNLOAD Feb 9 19:27:43.168756 kernel: audit: type=1334 audit(1707506862.286:92): prog-id=13 op=LOAD Feb 9 19:27:43.168822 kernel: audit: type=1334 audit(1707506862.294:93): prog-id=14 op=LOAD Feb 9 19:27:43.168851 kernel: audit: type=1334 audit(1707506862.294:94): prog-id=4 op=UNLOAD Feb 9 19:27:43.168874 kernel: audit: type=1334 audit(1707506862.294:95): prog-id=5 op=UNLOAD Feb 9 19:27:43.168896 kernel: audit: type=1334 audit(1707506862.308:96): prog-id=15 op=LOAD Feb 9 19:27:43.168919 kernel: audit: type=1334 audit(1707506862.308:97): prog-id=12 op=UNLOAD Feb 9 19:27:43.168947 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 19:27:43.168972 kernel: audit: type=1334 audit(1707506862.315:98): prog-id=16 op=LOAD Feb 9 19:27:43.169000 kernel: audit: type=1334 audit(1707506862.322:99): prog-id=17 op=LOAD Feb 9 19:27:43.169023 systemd[1]: Stopped initrd-switch-root.service. Feb 9 19:27:43.169049 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 19:27:43.169072 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:27:43.169096 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:27:43.169120 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 19:27:43.169151 systemd[1]: Created slice system-getty.slice. Feb 9 19:27:43.169175 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:27:43.169203 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:27:43.169227 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:27:43.169251 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:27:43.169276 systemd[1]: Created slice user.slice. 
Feb 9 19:27:43.169300 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:27:43.169323 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:27:43.169352 systemd[1]: Set up automount boot.automount. Feb 9 19:27:43.169375 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:27:43.169399 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 19:27:43.169422 systemd[1]: Stopped target initrd-fs.target. Feb 9 19:27:43.169445 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 19:27:43.169469 systemd[1]: Reached target integritysetup.target. Feb 9 19:27:43.169492 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:27:43.169515 systemd[1]: Reached target remote-fs.target. Feb 9 19:27:43.169538 systemd[1]: Reached target slices.target. Feb 9 19:27:43.169561 systemd[1]: Reached target swap.target. Feb 9 19:27:43.169588 systemd[1]: Reached target torcx.target. Feb 9 19:27:43.169612 systemd[1]: Reached target veritysetup.target. Feb 9 19:27:43.169636 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:27:43.169662 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:27:43.169686 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:27:43.169710 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:27:43.169734 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:27:43.169758 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:27:43.172826 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:27:43.172866 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:27:43.172892 systemd[1]: Mounting media.mount... Feb 9 19:27:43.172916 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:27:43.172940 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:27:43.172965 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:27:43.172990 systemd[1]: Mounting tmp.mount... 
Feb 9 19:27:43.173015 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:27:43.173038 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:27:43.173062 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:27:43.173089 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:27:43.173113 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:27:43.173138 systemd[1]: Starting modprobe@drm.service... Feb 9 19:27:43.173161 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:27:43.173185 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:27:43.173208 systemd[1]: Starting modprobe@loop.service... Feb 9 19:27:43.173231 kernel: fuse: init (API version 7.34) Feb 9 19:27:43.173256 kernel: loop: module loaded Feb 9 19:27:43.173280 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:27:43.173316 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 19:27:43.173340 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 19:27:43.173364 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 19:27:43.173388 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 19:27:43.173411 systemd[1]: Stopped systemd-journald.service. Feb 9 19:27:43.173434 systemd[1]: Starting systemd-journald.service... Feb 9 19:27:43.173458 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:27:43.173481 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:27:43.173508 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:27:43.173545 systemd-journald[997]: Journal started Feb 9 19:27:43.173666 systemd-journald[997]: Runtime Journal (/run/log/journal/1a608b5105a9c43b994666ff33f9bb4e) is 8.0M, max 148.8M, 140.8M free. 
Feb 9 19:27:38.483000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:27:38.771000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:27:38.922000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:27:38.922000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:27:38.922000 audit: BPF prog-id=10 op=LOAD Feb 9 19:27:38.922000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:27:38.922000 audit: BPF prog-id=11 op=LOAD Feb 9 19:27:38.922000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:27:39.119000 audit[906]: AVC avc: denied { associate } for pid=906 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:27:39.119000 audit[906]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001858e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=889 pid=906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:39.119000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:27:39.129000 audit[906]: AVC avc: denied { associate } for pid=906 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 
19:27:39.129000 audit[906]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001859b9 a2=1ed a3=0 items=2 ppid=889 pid=906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:39.129000 audit: CWD cwd="/" Feb 9 19:27:39.129000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:39.129000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:27:39.129000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:27:42.274000 audit: BPF prog-id=12 op=LOAD Feb 9 19:27:42.274000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:27:42.286000 audit: BPF prog-id=13 op=LOAD Feb 9 19:27:42.294000 audit: BPF prog-id=14 op=LOAD Feb 9 19:27:42.294000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:27:42.294000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:27:42.308000 audit: BPF prog-id=15 op=LOAD Feb 9 19:27:42.308000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:27:42.315000 audit: BPF prog-id=16 op=LOAD Feb 9 19:27:42.322000 audit: BPF prog-id=17 op=LOAD Feb 9 19:27:42.322000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:27:42.322000 audit: BPF prog-id=14 op=UNLOAD Feb 9 19:27:42.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:27:42.364000 audit: BPF prog-id=15 op=UNLOAD Feb 9 19:27:42.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:42.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:43.121000 audit: BPF prog-id=18 op=LOAD Feb 9 19:27:43.121000 audit: BPF prog-id=19 op=LOAD Feb 9 19:27:43.121000 audit: BPF prog-id=20 op=LOAD Feb 9 19:27:43.121000 audit: BPF prog-id=16 op=UNLOAD Feb 9 19:27:43.121000 audit: BPF prog-id=17 op=UNLOAD Feb 9 19:27:43.163000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:27:43.163000 audit[997]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffdea518280 a2=4000 a3=7ffdea51831c items=0 ppid=1 pid=997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:43.163000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:27:42.273542 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:27:39.114754 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:27:42.331113 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 9 19:27:39.116150 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:27:39.116178 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:27:39.116253 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 19:27:39.116267 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 19:27:39.116323 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 19:27:39.116344 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 19:27:39.116594 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 19:27:39.116654 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:27:39.116679 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:27:39.120133 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker 
path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 19:27:39.120189 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 19:27:39.120239 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 19:27:39.120259 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 19:27:39.120281 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 19:27:39.120300 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:39Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 19:27:41.640895 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:41Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:27:41.641224 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:41Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:27:41.641373 /usr/lib/systemd/system-generators/torcx-generator[906]: 
time="2024-02-09T19:27:41Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:27:41.641598 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:41Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:27:41.641657 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:41Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 19:27:41.641729 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-02-09T19:27:41Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 19:27:43.194985 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:27:43.213169 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 19:27:43.213287 systemd[1]: Stopped verity-setup.service. Feb 9 19:27:43.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.234224 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:27:43.243867 systemd[1]: Started systemd-journald.service. 
Feb 9 19:27:43.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.254369 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:27:43.263169 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:27:43.271287 systemd[1]: Mounted media.mount. Feb 9 19:27:43.279172 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:27:43.285848 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:27:43.294227 systemd[1]: Mounted tmp.mount. Feb 9 19:27:43.302338 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:27:43.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.311421 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:27:43.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.320387 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:27:43.320609 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:27:43.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.334420 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Feb 9 19:27:43.334631 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:27:43.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.343427 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:27:43.343637 systemd[1]: Finished modprobe@drm.service. Feb 9 19:27:43.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.352422 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:27:43.352637 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:27:43.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.361427 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:27:43.361648 systemd[1]: Finished modprobe@fuse.service. 
Feb 9 19:27:43.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.370404 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:27:43.370623 systemd[1]: Finished modprobe@loop.service. Feb 9 19:27:43.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.379429 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:27:43.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.388365 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:27:43.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.397377 systemd[1]: Finished systemd-remount-fs.service. 
Feb 9 19:27:43.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.406430 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:27:43.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.415701 systemd[1]: Reached target network-pre.target. Feb 9 19:27:43.425568 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:27:43.435518 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:27:43.442988 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:27:43.446057 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:27:43.455043 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:27:43.463995 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:27:43.465915 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:27:43.472994 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:27:43.474978 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:27:43.478940 systemd-journald[997]: Time spent on flushing to /var/log/journal/1a608b5105a9c43b994666ff33f9bb4e is 74.625ms for 1166 entries. Feb 9 19:27:43.478940 systemd-journald[997]: System Journal (/var/log/journal/1a608b5105a9c43b994666ff33f9bb4e) is 8.0M, max 584.8M, 576.8M free. Feb 9 19:27:43.593211 systemd-journald[997]: Received client request to flush runtime journal. 
Feb 9 19:27:43.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.492118 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:27:43.501900 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:27:43.512363 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:27:43.595124 udevadm[1011]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 19:27:43.522747 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:27:43.531370 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:27:43.540403 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:27:43.553136 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:27:43.562562 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:27:43.573510 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:27:43.594614 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:27:43.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:43.638683 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Feb 9 19:27:43.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:44.229919 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 19:27:44.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:44.237000 audit: BPF prog-id=21 op=LOAD
Feb 9 19:27:44.237000 audit: BPF prog-id=22 op=LOAD
Feb 9 19:27:44.237000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 19:27:44.237000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 19:27:44.239874 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:27:44.263904 systemd-udevd[1016]: Using default interface naming scheme 'v252'.
Feb 9 19:27:44.315524 systemd[1]: Started systemd-udevd.service.
Feb 9 19:27:44.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:44.325000 audit: BPF prog-id=23 op=LOAD
Feb 9 19:27:44.328195 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:27:44.342000 audit: BPF prog-id=24 op=LOAD
Feb 9 19:27:44.342000 audit: BPF prog-id=25 op=LOAD
Feb 9 19:27:44.342000 audit: BPF prog-id=26 op=LOAD
Feb 9 19:27:44.345485 systemd[1]: Starting systemd-userdbd.service...
Feb 9 19:27:44.402299 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 9 19:27:44.423866 systemd[1]: Started systemd-userdbd.service.
Feb 9 19:27:44.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:44.527810 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 9 19:27:44.562387 systemd-networkd[1026]: lo: Link UP
Feb 9 19:27:44.562401 systemd-networkd[1026]: lo: Gained carrier
Feb 9 19:27:44.563280 systemd-networkd[1026]: Enumeration completed
Feb 9 19:27:44.563462 systemd[1]: Started systemd-networkd.service.
Feb 9 19:27:44.563500 systemd-networkd[1026]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:27:44.565991 systemd-networkd[1026]: eth0: Link UP
Feb 9 19:27:44.566008 systemd-networkd[1026]: eth0: Gained carrier
Feb 9 19:27:44.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:44.583798 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1028)
Feb 9 19:27:44.584996 systemd-networkd[1026]: eth0: DHCPv4 address 10.128.0.112/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 9 19:27:44.563000 audit[1022]: AVC avc: denied { confidentiality } for pid=1022 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:27:44.563000 audit[1022]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55cbff7ccf70 a1=32194 a2=7fcdec663bc5 a3=5 items=108 ppid=1016 pid=1022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:27:44.563000 audit: CWD cwd="/"
Feb 9 19:27:44.563000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=1 name=(null) inode=14455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=2 name=(null) inode=14455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=3 name=(null) inode=14456 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=4 name=(null) inode=14455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=5 name=(null) inode=14457 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=6 name=(null) inode=14455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=7 name=(null) inode=14458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=8 name=(null) inode=14458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=9 name=(null) inode=14459 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=10 name=(null) inode=14458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=11 name=(null) inode=14460 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=12 name=(null) inode=14458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=13 name=(null) inode=14461 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=14 name=(null) inode=14458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=15 name=(null) inode=14462 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=16 name=(null) inode=14458 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=17 name=(null) inode=14463 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=18 name=(null) inode=14455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=19 name=(null) inode=14464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=20 name=(null) inode=14464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=21 name=(null) inode=14465 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=22 name=(null) inode=14464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=23 name=(null) inode=14466 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=24 name=(null) inode=14464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=25 name=(null) inode=14467 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=26 name=(null) inode=14464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=27 name=(null) inode=14468 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=28 name=(null) inode=14464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=29 name=(null) inode=14469 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=30 name=(null) inode=14455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=31 name=(null) inode=14470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=32 name=(null) inode=14470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=33 name=(null) inode=14471 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=34 name=(null) inode=14470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=35 name=(null) inode=14472 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=36 name=(null) inode=14470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=37 name=(null) inode=14473 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=38 name=(null) inode=14470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=39 name=(null) inode=14474 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=40 name=(null) inode=14470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=41 name=(null) inode=14475 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=42 name=(null) inode=14455 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=43 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=44 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=45 name=(null) inode=14477 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=46 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=47 name=(null) inode=14478 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=48 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=49 name=(null) inode=14479 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=50 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=51 name=(null) inode=14480 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=52 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=53 name=(null) inode=14481 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=55 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=56 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=57 name=(null) inode=14483 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=58 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=59 name=(null) inode=14484 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=60 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=61 name=(null) inode=14485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=62 name=(null) inode=14485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=63 name=(null) inode=14486 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=64 name=(null) inode=14485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=65 name=(null) inode=14487 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=66 name=(null) inode=14485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=67 name=(null) inode=14488 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=68 name=(null) inode=14485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=69 name=(null) inode=14489 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=70 name=(null) inode=14485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=71 name=(null) inode=14490 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=72 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=73 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=74 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=75 name=(null) inode=14492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=76 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=77 name=(null) inode=14493 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=78 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=79 name=(null) inode=14494 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=80 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=81 name=(null) inode=14495 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=82 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=83 name=(null) inode=14496 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=84 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=85 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=86 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=87 name=(null) inode=14498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=88 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.637921 kernel: ACPI: button: Power Button [PWRF]
Feb 9 19:27:44.563000 audit: PATH item=89 name=(null) inode=14499 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=90 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=91 name=(null) inode=14500 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=92 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=93 name=(null) inode=14501 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=94 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=95 name=(null) inode=14502 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=96 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=97 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=98 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=99 name=(null) inode=14504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=100 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=101 name=(null) inode=14505 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=102 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=103 name=(null) inode=14506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=104 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=105 name=(null) inode=14507 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=106 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PATH item=107 name=(null) inode=14508 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:27:44.563000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 19:27:44.672807 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Feb 9 19:27:44.692810 kernel: EDAC MC: Ver: 3.0.0
Feb 9 19:27:44.692932 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 9 19:27:44.692966 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Feb 9 19:27:44.715805 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 19:27:44.735816 kernel: ACPI: button: Sleep Button [SLPF]
Feb 9 19:27:44.739545 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:27:44.757384 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 19:27:44.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:44.767682 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 19:27:44.799696 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:27:44.831262 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 19:27:44.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:44.841155 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:27:44.851476 systemd[1]: Starting lvm2-activation.service...
Feb 9 19:27:44.857702 lvm[1054]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:27:44.884138 systemd[1]: Finished lvm2-activation.service.
Feb 9 19:27:44.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:44.893117 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:27:44.901950 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 19:27:44.902005 systemd[1]: Reached target local-fs.target.
Feb 9 19:27:44.910944 systemd[1]: Reached target machines.target.
Feb 9 19:27:44.920479 systemd[1]: Starting ldconfig.service...
Feb 9 19:27:44.928821 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 19:27:44.928900 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:27:44.930641 systemd[1]: Starting systemd-boot-update.service...
Feb 9 19:27:44.939932 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 19:27:44.951788 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 19:27:44.952154 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:27:44.952267 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:27:44.954294 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 19:27:44.955248 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1056 (bootctl)
Feb 9 19:27:44.957654 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 19:27:44.988029 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 19:27:44.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:44.994636 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 19:27:44.997826 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 19:27:45.000322 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 19:27:45.123877 systemd-fsck[1065]: fsck.fat 4.2 (2021-01-31)
Feb 9 19:27:45.123877 systemd-fsck[1065]: /dev/sda1: 789 files, 115339/258078 clusters
Feb 9 19:27:45.125885 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 19:27:45.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:45.138333 systemd[1]: Mounting boot.mount...
Feb 9 19:27:45.177741 systemd[1]: Mounted boot.mount.
Feb 9 19:27:45.206118 systemd[1]: Finished systemd-boot-update.service.
Feb 9 19:27:45.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:45.310354 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 19:27:45.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:45.320856 systemd[1]: Starting audit-rules.service...
Feb 9 19:27:45.329686 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 19:27:45.339760 systemd[1]: Starting oem-gce-enable-oslogin.service...
Feb 9 19:27:45.349730 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 19:27:45.358000 audit: BPF prog-id=27 op=LOAD
Feb 9 19:27:45.361542 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:27:45.368000 audit: BPF prog-id=28 op=LOAD
Feb 9 19:27:45.371749 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 19:27:45.380663 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 19:27:45.416221 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 19:27:45.422000 audit[1076]: SYSTEM_BOOT pid=1076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:45.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:45.433343 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 19:27:45.438485 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 19:27:45.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:45.462002 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully.
Feb 9 19:27:45.462245 systemd[1]: Finished oem-gce-enable-oslogin.service.
Feb 9 19:27:45.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:45.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:45.490347 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 19:27:45.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:27:45.558000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 19:27:45.558000 audit[1099]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff73055ac0 a2=420 a3=0 items=0 ppid=1069 pid=1099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:27:45.558000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 19:27:45.560515 augenrules[1099]: No rules
Feb 9 19:27:45.561904 systemd[1]: Finished audit-rules.service.
Feb 9 19:27:45.571130 systemd[1]: Started systemd-timesyncd.service.
Feb 9 19:27:45.574065 systemd-timesyncd[1075]: Contacted time server 169.254.169.254:123 (169.254.169.254).
Feb 9 19:27:45.574611 systemd-timesyncd[1075]: Initial clock synchronization to Fri 2024-02-09 19:27:45.961200 UTC.
Feb 9 19:27:45.580821 systemd[1]: Reached target time-set.target.
Feb 9 19:27:45.587298 systemd-resolved[1074]: Positive Trust Anchors:
Feb 9 19:27:45.587326 systemd-resolved[1074]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:27:45.587376 systemd-resolved[1074]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:27:45.621654 systemd-resolved[1074]: Defaulting to hostname 'linux'.
Feb 9 19:27:45.624688 systemd[1]: Started systemd-resolved.service.
Feb 9 19:27:45.633168 systemd[1]: Reached target network.target. Feb 9 19:27:45.641973 systemd[1]: Reached target nss-lookup.target. Feb 9 19:27:45.698533 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:27:45.699566 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:27:45.701097 ldconfig[1055]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:27:45.708430 systemd[1]: Finished ldconfig.service. Feb 9 19:27:45.717759 systemd[1]: Starting systemd-update-done.service... Feb 9 19:27:45.728039 systemd[1]: Finished systemd-update-done.service. Feb 9 19:27:45.737127 systemd[1]: Reached target sysinit.target. Feb 9 19:27:45.746121 systemd[1]: Started motdgen.path. Feb 9 19:27:45.754047 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:27:45.764243 systemd[1]: Started logrotate.timer. Feb 9 19:27:45.771129 systemd[1]: Started mdadm.timer. Feb 9 19:27:45.778008 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:27:45.787004 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:27:45.787073 systemd[1]: Reached target paths.target. Feb 9 19:27:45.793982 systemd[1]: Reached target timers.target. Feb 9 19:27:45.801492 systemd[1]: Listening on dbus.socket. Feb 9 19:27:45.810372 systemd[1]: Starting docker.socket... Feb 9 19:27:45.822396 systemd[1]: Listening on sshd.socket. Feb 9 19:27:45.830165 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:27:45.830973 systemd[1]: Listening on docker.socket. Feb 9 19:27:45.838151 systemd[1]: Reached target sockets.target. Feb 9 19:27:45.846966 systemd[1]: Reached target basic.target. 
Feb 9 19:27:45.854033 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:27:45.854084 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:27:45.855840 systemd[1]: Starting containerd.service... Feb 9 19:27:45.864485 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:27:45.876283 systemd[1]: Starting dbus.service... Feb 9 19:27:45.883788 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:27:45.893800 systemd[1]: Starting extend-filesystems.service... Feb 9 19:27:45.900975 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:27:45.903201 systemd[1]: Starting motdgen.service... Feb 9 19:27:45.909840 jq[1112]: false Feb 9 19:27:45.911006 systemd-networkd[1026]: eth0: Gained IPv6LL Feb 9 19:27:45.912866 systemd[1]: Starting oem-gce.service... Feb 9 19:27:45.922824 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:27:45.932885 systemd[1]: Starting prepare-critools.service... Feb 9 19:27:45.941899 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:27:45.950838 systemd[1]: Starting sshd-keygen.service... Feb 9 19:27:45.961857 systemd[1]: Starting systemd-logind.service... Feb 9 19:27:45.969960 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:27:45.970084 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Feb 9 19:27:45.970924 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 19:27:45.972240 systemd[1]: Starting update-engine.service... 
Feb 9 19:27:45.979176 extend-filesystems[1113]: Found sda Feb 9 19:27:45.979176 extend-filesystems[1113]: Found sda1 Feb 9 19:27:45.979176 extend-filesystems[1113]: Found sda2 Feb 9 19:27:46.037994 extend-filesystems[1113]: Found sda3 Feb 9 19:27:46.037994 extend-filesystems[1113]: Found usr Feb 9 19:27:46.037994 extend-filesystems[1113]: Found sda4 Feb 9 19:27:46.037994 extend-filesystems[1113]: Found sda6 Feb 9 19:27:46.037994 extend-filesystems[1113]: Found sda7 Feb 9 19:27:46.037994 extend-filesystems[1113]: Found sda9 Feb 9 19:27:46.037994 extend-filesystems[1113]: Checking size of /dev/sda9 Feb 9 19:27:45.981916 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:27:46.090881 extend-filesystems[1113]: Resized partition /dev/sda9 Feb 9 19:27:46.100192 jq[1136]: true Feb 9 19:27:45.994335 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:27:45.994629 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:27:46.101240 tar[1141]: ./ Feb 9 19:27:46.101240 tar[1141]: ./macvlan Feb 9 19:27:45.995278 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:27:45.995551 systemd[1]: Finished motdgen.service. Feb 9 19:27:46.015746 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:27:46.102730 jq[1143]: true Feb 9 19:27:46.016060 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 9 19:27:46.103502 mkfs.ext4[1146]: mke2fs 1.46.5 (30-Dec-2021) Feb 9 19:27:46.103502 mkfs.ext4[1146]: Discarding device blocks: done Feb 9 19:27:46.103502 mkfs.ext4[1146]: Creating filesystem with 262144 4k blocks and 65536 inodes Feb 9 19:27:46.103502 mkfs.ext4[1146]: Filesystem UUID: 6417db1f-5a33-4a47-a7d8-4b7604107caf Feb 9 19:27:46.103502 mkfs.ext4[1146]: Superblock backups stored on blocks: Feb 9 19:27:46.103502 mkfs.ext4[1146]: 32768, 98304, 163840, 229376 Feb 9 19:27:46.103502 mkfs.ext4[1146]: Allocating group tables: done Feb 9 19:27:46.103502 mkfs.ext4[1146]: Writing inode tables: done Feb 9 19:27:46.103502 mkfs.ext4[1146]: Creating journal (8192 blocks): done Feb 9 19:27:46.103502 mkfs.ext4[1146]: Writing superblocks and filesystem accounting information: done Feb 9 19:27:46.113311 extend-filesystems[1156]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:27:46.130262 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Feb 9 19:27:46.162043 update_engine[1133]: I0209 19:27:46.161535 1133 main.cc:92] Flatcar Update Engine starting Feb 9 19:27:46.180138 dbus-daemon[1111]: [system] SELinux support is enabled Feb 9 19:27:46.180432 systemd[1]: Started dbus.service. Feb 9 19:27:46.184266 umount[1170]: umount: /var/lib/flatcar-oem-gce.img: not mounted. 
Feb 9 19:27:46.186291 dbus-daemon[1111]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1026 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 19:27:46.194257 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:27:46.194341 systemd[1]: Reached target system-config.target. Feb 9 19:27:46.199036 update_engine[1133]: I0209 19:27:46.198993 1133 update_check_scheduler.cc:74] Next update check in 4m13s Feb 9 19:27:46.203324 tar[1142]: crictl Feb 9 19:27:46.203042 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:27:46.203094 systemd[1]: Reached target user-config.target. Feb 9 19:27:46.211906 kernel: loop0: detected capacity change from 0 to 2097152 Feb 9 19:27:46.220118 systemd[1]: Started update-engine.service. Feb 9 19:27:46.220853 dbus-daemon[1111]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 19:27:46.238205 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Feb 9 19:27:46.239323 systemd[1]: Started locksmithd.service. Feb 9 19:27:46.250381 systemd[1]: Starting systemd-hostnamed.service... Feb 9 19:27:46.253850 extend-filesystems[1156]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 9 19:27:46.253850 extend-filesystems[1156]: old_desc_blocks = 1, new_desc_blocks = 2 Feb 9 19:27:46.253850 extend-filesystems[1156]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Feb 9 19:27:46.367839 kernel: EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:27:46.258682 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Feb 9 19:27:46.376315 extend-filesystems[1113]: Resized filesystem in /dev/sda9 Feb 9 19:27:46.385394 bash[1175]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:27:46.385569 env[1145]: time="2024-02-09T19:27:46.298374187Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:27:46.258974 systemd[1]: Finished extend-filesystems.service. Feb 9 19:27:46.268606 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:27:46.394062 coreos-metadata[1110]: Feb 09 19:27:46.393 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Feb 9 19:27:46.396476 systemd-logind[1131]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:27:46.396523 systemd-logind[1131]: Watching system buttons on /dev/input/event3 (Sleep Button) Feb 9 19:27:46.397882 coreos-metadata[1110]: Feb 09 19:27:46.397 INFO Fetch failed with 404: resource not found Feb 9 19:27:46.397882 coreos-metadata[1110]: Feb 09 19:27:46.397 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Feb 9 19:27:46.396560 systemd-logind[1131]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:27:46.398405 coreos-metadata[1110]: Feb 09 19:27:46.398 INFO Fetch successful Feb 9 19:27:46.398405 coreos-metadata[1110]: Feb 09 19:27:46.398 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Feb 9 19:27:46.399074 coreos-metadata[1110]: Feb 09 19:27:46.398 INFO Fetch failed with 404: resource not found Feb 9 19:27:46.399074 coreos-metadata[1110]: Feb 09 19:27:46.398 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Feb 9 19:27:46.399808 coreos-metadata[1110]: Feb 09 19:27:46.399 INFO Fetch failed with 404: resource not found Feb 9 19:27:46.399808 coreos-metadata[1110]: Feb 09 19:27:46.399 INFO Fetching 
http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Feb 9 19:27:46.401194 coreos-metadata[1110]: Feb 09 19:27:46.400 INFO Fetch successful Feb 9 19:27:46.408339 systemd-logind[1131]: New seat seat0. Feb 9 19:27:46.410077 unknown[1110]: wrote ssh authorized keys file for user: core Feb 9 19:27:46.433757 systemd[1]: Started systemd-logind.service. Feb 9 19:27:46.493894 update-ssh-keys[1186]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:27:46.495283 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 19:27:46.520651 env[1145]: time="2024-02-09T19:27:46.520579243Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:27:46.520916 env[1145]: time="2024-02-09T19:27:46.520789451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:27:46.529399 tar[1141]: ./static Feb 9 19:27:46.556379 env[1145]: time="2024-02-09T19:27:46.556317585Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:27:46.557096 env[1145]: time="2024-02-09T19:27:46.557059881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:27:46.557635 env[1145]: time="2024-02-09T19:27:46.557599379Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:27:46.561887 env[1145]: time="2024-02-09T19:27:46.561849201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:27:46.562047 env[1145]: time="2024-02-09T19:27:46.562022174Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:27:46.562145 env[1145]: time="2024-02-09T19:27:46.562123762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:27:46.562437 env[1145]: time="2024-02-09T19:27:46.562409158Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:27:46.564219 env[1145]: time="2024-02-09T19:27:46.564185013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:27:46.571044 env[1145]: time="2024-02-09T19:27:46.570988771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:27:46.573063 env[1145]: time="2024-02-09T19:27:46.573016194Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:27:46.573385 env[1145]: time="2024-02-09T19:27:46.573354366Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:27:46.575502 env[1145]: time="2024-02-09T19:27:46.575461304Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:27:46.585559 env[1145]: time="2024-02-09T19:27:46.585502588Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:27:46.585802 env[1145]: time="2024-02-09T19:27:46.585776988Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 9 19:27:46.585934 env[1145]: time="2024-02-09T19:27:46.585909613Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:27:46.586088 env[1145]: time="2024-02-09T19:27:46.586065712Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:27:46.586266 env[1145]: time="2024-02-09T19:27:46.586246055Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:27:46.586370 env[1145]: time="2024-02-09T19:27:46.586351379Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:27:46.586500 env[1145]: time="2024-02-09T19:27:46.586467149Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:27:46.586686 env[1145]: time="2024-02-09T19:27:46.586663874Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:27:46.586786 env[1145]: time="2024-02-09T19:27:46.586767930Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:27:46.586929 env[1145]: time="2024-02-09T19:27:46.586901787Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:27:46.587049 env[1145]: time="2024-02-09T19:27:46.587026836Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:27:46.587155 env[1145]: time="2024-02-09T19:27:46.587134784Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:27:46.587420 env[1145]: time="2024-02-09T19:27:46.587395824Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 9 19:27:46.587691 env[1145]: time="2024-02-09T19:27:46.587665126Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:27:46.588367 env[1145]: time="2024-02-09T19:27:46.588337037Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:27:46.588539 env[1145]: time="2024-02-09T19:27:46.588512746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:27:46.588670 env[1145]: time="2024-02-09T19:27:46.588646084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:27:46.588859 env[1145]: time="2024-02-09T19:27:46.588836058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:27:46.589070 env[1145]: time="2024-02-09T19:27:46.589043423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:27:46.589193 env[1145]: time="2024-02-09T19:27:46.589170345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:27:46.589303 env[1145]: time="2024-02-09T19:27:46.589282263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:27:46.589401 env[1145]: time="2024-02-09T19:27:46.589383069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:27:46.589492 env[1145]: time="2024-02-09T19:27:46.589474037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:27:46.589597 env[1145]: time="2024-02-09T19:27:46.589574748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 9 19:27:46.589729 env[1145]: time="2024-02-09T19:27:46.589706325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:27:46.589873 env[1145]: time="2024-02-09T19:27:46.589850577Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:27:46.590184 env[1145]: time="2024-02-09T19:27:46.590155484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:27:46.593271 env[1145]: time="2024-02-09T19:27:46.593226427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:27:46.597909 env[1145]: time="2024-02-09T19:27:46.597864371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:27:46.598110 env[1145]: time="2024-02-09T19:27:46.598083845Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:27:46.599907 env[1145]: time="2024-02-09T19:27:46.599863984Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:27:46.600063 env[1145]: time="2024-02-09T19:27:46.600038618Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:27:46.600212 env[1145]: time="2024-02-09T19:27:46.600189770Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:27:46.600352 env[1145]: time="2024-02-09T19:27:46.600328900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 19:27:46.600900 env[1145]: time="2024-02-09T19:27:46.600775938Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:27:46.604807 env[1145]: time="2024-02-09T19:27:46.603015159Z" level=info msg="Connect containerd service" Feb 9 19:27:46.604807 env[1145]: time="2024-02-09T19:27:46.603092319Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:27:46.605485 env[1145]: time="2024-02-09T19:27:46.605446201Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:27:46.613248 env[1145]: time="2024-02-09T19:27:46.613163748Z" level=info msg="Start subscribing containerd event" Feb 9 19:27:46.613497 env[1145]: time="2024-02-09T19:27:46.613462840Z" level=info msg="Start recovering state" Feb 9 19:27:46.616326 env[1145]: time="2024-02-09T19:27:46.616285735Z" level=info msg="Start event monitor" Feb 9 19:27:46.616509 env[1145]: time="2024-02-09T19:27:46.616487587Z" level=info msg="Start snapshots syncer" Feb 9 19:27:46.621931 env[1145]: time="2024-02-09T19:27:46.621878811Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:27:46.622140 env[1145]: time="2024-02-09T19:27:46.622108342Z" level=info msg="Start streaming server" Feb 9 19:27:46.623134 env[1145]: time="2024-02-09T19:27:46.623083210Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:27:46.623430 env[1145]: time="2024-02-09T19:27:46.623409574Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:27:46.625967 systemd[1]: Started containerd.service. Feb 9 19:27:46.662078 dbus-daemon[1111]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 19:27:46.662502 systemd[1]: Started systemd-hostnamed.service. 
Feb 9 19:27:46.663507 dbus-daemon[1111]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1178 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 19:27:46.676146 systemd[1]: Starting polkit.service... Feb 9 19:27:46.691568 env[1145]: time="2024-02-09T19:27:46.691514461Z" level=info msg="containerd successfully booted in 0.394309s" Feb 9 19:27:46.762857 tar[1141]: ./vlan Feb 9 19:27:46.803695 polkitd[1189]: Started polkitd version 121 Feb 9 19:27:46.839921 polkitd[1189]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 19:27:46.840025 polkitd[1189]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 19:27:46.844171 polkitd[1189]: Finished loading, compiling and executing 2 rules Feb 9 19:27:46.844913 dbus-daemon[1111]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 19:27:46.845157 systemd[1]: Started polkit.service. Feb 9 19:27:46.845560 polkitd[1189]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 19:27:46.877171 systemd-hostnamed[1178]: Hostname set to (transient) Feb 9 19:27:46.880505 systemd-resolved[1074]: System hostname changed to 'ci-3510-3-2-2f0e657b66ac8418d690.c.flatcar-212911.internal'. Feb 9 19:27:46.975429 tar[1141]: ./portmap Feb 9 19:27:47.133662 tar[1141]: ./host-local Feb 9 19:27:47.221509 tar[1141]: ./vrf Feb 9 19:27:47.324408 tar[1141]: ./bridge Feb 9 19:27:47.361428 systemd[1]: Finished prepare-critools.service. Feb 9 19:27:47.404546 tar[1141]: ./tuning Feb 9 19:27:47.458749 tar[1141]: ./firewall Feb 9 19:27:47.536509 tar[1141]: ./host-device Feb 9 19:27:47.606623 tar[1141]: ./sbr Feb 9 19:27:47.702197 tar[1141]: ./loopback Feb 9 19:27:47.783621 tar[1141]: ./dhcp Feb 9 19:27:48.032896 tar[1141]: ./ptp Feb 9 19:27:48.130258 tar[1141]: ./ipvlan Feb 9 19:27:48.248527 tar[1141]: ./bandwidth Feb 9 19:27:48.370198 systemd[1]: Finished prepare-cni-plugins.service. 
Feb 9 19:27:50.124660 sshd_keygen[1138]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:27:50.167691 systemd[1]: Finished sshd-keygen.service. Feb 9 19:27:50.177543 systemd[1]: Starting issuegen.service... Feb 9 19:27:50.194992 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:27:50.195255 systemd[1]: Finished issuegen.service. Feb 9 19:27:50.198386 locksmithd[1176]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:27:50.205638 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:27:50.221416 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:27:50.232544 systemd[1]: Started getty@tty1.service. Feb 9 19:27:50.242524 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:27:50.251351 systemd[1]: Reached target getty.target. Feb 9 19:27:52.039568 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Feb 9 19:27:54.089841 kernel: loop0: detected capacity change from 0 to 2097152 Feb 9 19:27:54.115456 systemd-nspawn[1218]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Feb 9 19:27:54.115456 systemd-nspawn[1218]: Press ^] three times within 1s to kill container. Feb 9 19:27:54.131803 kernel: EXT4-fs (loop0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:27:54.213607 systemd[1]: Started oem-gce.service. Feb 9 19:27:54.222431 systemd[1]: Reached target multi-user.target. Feb 9 19:27:54.233270 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:27:54.247904 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:27:54.248152 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:27:54.258201 systemd[1]: Startup finished in 1.085s (kernel) + 8.810s (initrd) + 15.614s (userspace) = 25.510s. 
Feb 9 19:27:54.291198 systemd-nspawn[1218]: + '[' -e /etc/default/instance_configs.cfg.template ']' Feb 9 19:27:54.291198 systemd-nspawn[1218]: + echo -e '[InstanceSetup]\nset_host_keys = false' Feb 9 19:27:54.291418 systemd-nspawn[1218]: + /usr/bin/google_instance_setup Feb 9 19:27:55.040741 instance-setup[1224]: INFO Running google_set_multiqueue. Feb 9 19:27:55.059199 instance-setup[1224]: INFO Set channels for eth0 to 2. Feb 9 19:27:55.063296 instance-setup[1224]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Feb 9 19:27:55.064843 instance-setup[1224]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Feb 9 19:27:55.065367 instance-setup[1224]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Feb 9 19:27:55.066957 instance-setup[1224]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Feb 9 19:27:55.067333 instance-setup[1224]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Feb 9 19:27:55.068760 instance-setup[1224]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Feb 9 19:27:55.069301 instance-setup[1224]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Feb 9 19:27:55.070844 instance-setup[1224]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Feb 9 19:27:55.083071 instance-setup[1224]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Feb 9 19:27:55.083505 instance-setup[1224]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Feb 9 19:27:55.131120 systemd-nspawn[1218]: + /usr/bin/google_metadata_script_runner --script-type startup Feb 9 19:27:55.478931 startup-script[1255]: INFO Starting startup scripts. Feb 9 19:27:55.492868 startup-script[1255]: INFO No startup scripts found in metadata. Feb 9 19:27:55.493036 startup-script[1255]: INFO Finished running startup scripts. 
Feb 9 19:27:55.530115 systemd-nspawn[1218]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Feb 9 19:27:55.530115 systemd-nspawn[1218]: + daemon_pids=() Feb 9 19:27:55.530399 systemd-nspawn[1218]: + for d in accounts clock_skew network Feb 9 19:27:55.530399 systemd-nspawn[1218]: + daemon_pids+=($!) Feb 9 19:27:55.530523 systemd-nspawn[1218]: + for d in accounts clock_skew network Feb 9 19:27:55.530673 systemd-nspawn[1218]: + daemon_pids+=($!) Feb 9 19:27:55.530765 systemd-nspawn[1218]: + for d in accounts clock_skew network Feb 9 19:27:55.530941 systemd-nspawn[1218]: + daemon_pids+=($!) Feb 9 19:27:55.531045 systemd-nspawn[1218]: + NOTIFY_SOCKET=/run/systemd/notify Feb 9 19:27:55.531045 systemd-nspawn[1218]: + /usr/bin/systemd-notify --ready Feb 9 19:27:55.531470 systemd-nspawn[1218]: + /usr/bin/google_accounts_daemon Feb 9 19:27:55.531764 systemd-nspawn[1218]: + /usr/bin/google_clock_skew_daemon Feb 9 19:27:55.532152 systemd-nspawn[1218]: + /usr/bin/google_network_daemon Feb 9 19:27:55.590172 systemd-nspawn[1218]: + wait -n 36 37 38 Feb 9 19:27:55.999077 systemd[1]: Created slice system-sshd.slice. Feb 9 19:27:56.003381 systemd[1]: Started sshd@0-10.128.0.112:22-147.75.109.163:56812.service. Feb 9 19:27:56.165865 google-clock-skew[1259]: INFO Starting Google Clock Skew daemon. Feb 9 19:27:56.181295 google-clock-skew[1259]: INFO Clock drift token has changed: 0. Feb 9 19:27:56.192459 systemd-nspawn[1218]: hwclock: Cannot access the Hardware Clock via any known method. Feb 9 19:27:56.193212 systemd-nspawn[1218]: hwclock: Use the --verbose option to see the details of our search for an access method. Feb 9 19:27:56.194268 google-clock-skew[1259]: WARNING Failed to sync system time with hardware clock. Feb 9 19:27:56.314015 google-networking[1260]: INFO Starting Google Networking daemon. 
Feb 9 19:27:56.343955 sshd[1265]: Accepted publickey for core from 147.75.109.163 port 56812 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:56.347385 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:56.366653 systemd[1]: Created slice user-500.slice. Feb 9 19:27:56.369159 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:27:56.376487 systemd-logind[1131]: New session 1 of user core. Feb 9 19:27:56.384766 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:27:56.387273 systemd[1]: Starting user@500.service... Feb 9 19:27:56.395391 groupadd[1273]: group added to /etc/group: name=google-sudoers, GID=1000 Feb 9 19:27:56.400387 groupadd[1273]: group added to /etc/gshadow: name=google-sudoers Feb 9 19:27:56.402936 (systemd)[1275]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:56.406163 groupadd[1273]: new group: name=google-sudoers, GID=1000 Feb 9 19:27:56.424223 google-accounts[1258]: INFO Starting Google Accounts daemon. Feb 9 19:27:56.458896 google-accounts[1258]: WARNING OS Login not installed. Feb 9 19:27:56.460127 google-accounts[1258]: INFO Creating a new user account for 0. Feb 9 19:27:56.467154 systemd-nspawn[1218]: useradd: invalid user name '0': use --badname to ignore Feb 9 19:27:56.468079 google-accounts[1258]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Feb 9 19:27:56.527514 systemd[1275]: Queued start job for default target default.target. Feb 9 19:27:56.528321 systemd[1275]: Reached target paths.target. Feb 9 19:27:56.528359 systemd[1275]: Reached target sockets.target. Feb 9 19:27:56.528382 systemd[1275]: Reached target timers.target. Feb 9 19:27:56.528403 systemd[1275]: Reached target basic.target. Feb 9 19:27:56.528480 systemd[1275]: Reached target default.target. Feb 9 19:27:56.528534 systemd[1275]: Startup finished in 113ms. 
Feb 9 19:27:56.529220 systemd[1]: Started user@500.service.
Feb 9 19:27:56.530954 systemd[1]: Started session-1.scope.
Feb 9 19:27:56.754919 systemd[1]: Started sshd@1-10.128.0.112:22-147.75.109.163:56828.service.
Feb 9 19:27:57.042114 sshd[1293]: Accepted publickey for core from 147.75.109.163 port 56828 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs
Feb 9 19:27:57.044214 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:57.051121 systemd-logind[1131]: New session 2 of user core.
Feb 9 19:27:57.051926 systemd[1]: Started session-2.scope.
Feb 9 19:27:57.260729 sshd[1293]: pam_unix(sshd:session): session closed for user core
Feb 9 19:27:57.264767 systemd[1]: sshd@1-10.128.0.112:22-147.75.109.163:56828.service: Deactivated successfully.
Feb 9 19:27:57.265923 systemd[1]: session-2.scope: Deactivated successfully.
Feb 9 19:27:57.266831 systemd-logind[1131]: Session 2 logged out. Waiting for processes to exit.
Feb 9 19:27:57.268102 systemd-logind[1131]: Removed session 2.
Feb 9 19:27:57.308014 systemd[1]: Started sshd@2-10.128.0.112:22-147.75.109.163:56842.service.
Feb 9 19:27:57.599807 sshd[1299]: Accepted publickey for core from 147.75.109.163 port 56842 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs
Feb 9 19:27:57.601495 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:57.607879 systemd-logind[1131]: New session 3 of user core.
Feb 9 19:27:57.608127 systemd[1]: Started session-3.scope.
Feb 9 19:27:57.811043 sshd[1299]: pam_unix(sshd:session): session closed for user core
Feb 9 19:27:57.815367 systemd[1]: sshd@2-10.128.0.112:22-147.75.109.163:56842.service: Deactivated successfully.
Feb 9 19:27:57.816427 systemd[1]: session-3.scope: Deactivated successfully.
Feb 9 19:27:57.817396 systemd-logind[1131]: Session 3 logged out. Waiting for processes to exit.
Feb 9 19:27:57.818553 systemd-logind[1131]: Removed session 3.
Feb 9 19:27:57.856133 systemd[1]: Started sshd@3-10.128.0.112:22-147.75.109.163:56854.service.
Feb 9 19:27:58.142419 sshd[1306]: Accepted publickey for core from 147.75.109.163 port 56854 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs
Feb 9 19:27:58.144469 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:58.151327 systemd-logind[1131]: New session 4 of user core.
Feb 9 19:27:58.152128 systemd[1]: Started session-4.scope.
Feb 9 19:27:58.358057 sshd[1306]: pam_unix(sshd:session): session closed for user core
Feb 9 19:27:58.362140 systemd[1]: sshd@3-10.128.0.112:22-147.75.109.163:56854.service: Deactivated successfully.
Feb 9 19:27:58.363201 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 19:27:58.364189 systemd-logind[1131]: Session 4 logged out. Waiting for processes to exit.
Feb 9 19:27:58.365534 systemd-logind[1131]: Removed session 4.
Feb 9 19:27:58.404987 systemd[1]: Started sshd@4-10.128.0.112:22-147.75.109.163:56856.service.
Feb 9 19:27:58.696500 sshd[1312]: Accepted publickey for core from 147.75.109.163 port 56856 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs
Feb 9 19:27:58.698561 sshd[1312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:27:58.705394 systemd[1]: Started session-5.scope.
Feb 9 19:27:58.706029 systemd-logind[1131]: New session 5 of user core.
Feb 9 19:27:58.895799 sudo[1315]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 19:27:58.896198 sudo[1315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:27:59.490637 systemd[1]: Reloading.
Feb 9 19:27:59.570898 /usr/lib/systemd/system-generators/torcx-generator[1344]: time="2024-02-09T19:27:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:27:59.570947 /usr/lib/systemd/system-generators/torcx-generator[1344]: time="2024-02-09T19:27:59Z" level=info msg="torcx already run"
Feb 9 19:27:59.712752 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:27:59.712808 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:27:59.740062 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:27:59.876204 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 19:27:59.886543 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 19:27:59.887959 systemd[1]: Reached target network-online.target.
Feb 9 19:27:59.890553 systemd[1]: Started kubelet.service.
Feb 9 19:27:59.912597 systemd[1]: Starting coreos-metadata.service...
Feb 9 19:27:59.995308 coreos-metadata[1397]: Feb 09 19:27:59.995 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Feb 9 19:28:00.000146 coreos-metadata[1397]: Feb 09 19:28:00.000 INFO Fetch successful
Feb 9 19:28:00.000146 coreos-metadata[1397]: Feb 09 19:28:00.000 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Feb 9 19:28:00.001149 coreos-metadata[1397]: Feb 09 19:28:00.001 INFO Fetch successful
Feb 9 19:28:00.001149 coreos-metadata[1397]: Feb 09 19:28:00.001 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Feb 9 19:28:00.002054 coreos-metadata[1397]: Feb 09 19:28:00.001 INFO Fetch successful
Feb 9 19:28:00.002054 coreos-metadata[1397]: Feb 09 19:28:00.001 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Feb 9 19:28:00.004708 coreos-metadata[1397]: Feb 09 19:28:00.002 INFO Fetch successful
Feb 9 19:28:00.009534 kubelet[1389]: E0209 19:28:00.009291 1389 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 19:28:00.014522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 19:28:00.014767 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 19:28:00.017946 systemd[1]: Finished coreos-metadata.service.
Feb 9 19:28:00.458092 systemd[1]: Stopped kubelet.service.
Feb 9 19:28:00.480473 systemd[1]: Reloading.
Feb 9 19:28:00.599754 /usr/lib/systemd/system-generators/torcx-generator[1456]: time="2024-02-09T19:28:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:28:00.606931 /usr/lib/systemd/system-generators/torcx-generator[1456]: time="2024-02-09T19:28:00Z" level=info msg="torcx already run"
Feb 9 19:28:00.697111 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:28:00.697139 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:28:00.724083 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:28:00.844304 systemd[1]: Started kubelet.service.
Feb 9 19:28:00.905542 kubelet[1496]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:28:00.905542 kubelet[1496]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:28:00.906089 kubelet[1496]: I0209 19:28:00.905608 1496 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:28:00.907439 kubelet[1496]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:28:00.907439 kubelet[1496]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:28:01.373232 kubelet[1496]: I0209 19:28:01.373177 1496 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 19:28:01.373232 kubelet[1496]: I0209 19:28:01.373215 1496 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:28:01.373559 kubelet[1496]: I0209 19:28:01.373519 1496 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 19:28:01.376710 kubelet[1496]: I0209 19:28:01.376678 1496 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:28:01.382256 kubelet[1496]: I0209 19:28:01.382200 1496 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:28:01.382667 kubelet[1496]: I0209 19:28:01.382643 1496 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:28:01.382837 kubelet[1496]: I0209 19:28:01.382816 1496 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 19:28:01.383021 kubelet[1496]: I0209 19:28:01.382860 1496 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 19:28:01.383021 kubelet[1496]: I0209 19:28:01.382880 1496 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 19:28:01.383165 kubelet[1496]: I0209 19:28:01.383045 1496 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:28:01.387317 kubelet[1496]: I0209 19:28:01.387287 1496 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 19:28:01.387317 kubelet[1496]: I0209 19:28:01.387320 1496 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:28:01.387555 kubelet[1496]: I0209 19:28:01.387352 1496 kubelet.go:297] "Adding apiserver pod source"
Feb 9 19:28:01.387555 kubelet[1496]: I0209 19:28:01.387374 1496 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:28:01.388262 kubelet[1496]: E0209 19:28:01.388079 1496 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:01.388262 kubelet[1496]: E0209 19:28:01.388240 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:01.388608 kubelet[1496]: I0209 19:28:01.388582 1496 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:28:01.392998 kubelet[1496]: W0209 19:28:01.392967 1496 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 19:28:01.393663 kubelet[1496]: I0209 19:28:01.393634 1496 server.go:1186] "Started kubelet"
Feb 9 19:28:01.393965 kubelet[1496]: I0209 19:28:01.393942 1496 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:28:01.395146 kubelet[1496]: I0209 19:28:01.395120 1496 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 19:28:01.396798 kubelet[1496]: E0209 19:28:01.395860 1496 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:28:01.396798 kubelet[1496]: E0209 19:28:01.395893 1496 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:28:01.412939 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 19:28:01.418512 kubelet[1496]: E0209 19:28:01.418356 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b248763cfffb09", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 393589001, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 393589001, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:28:01.420013 kubelet[1496]: W0209 19:28:01.419980 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:28:01.420253 kubelet[1496]: E0209 19:28:01.420235 1496 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:28:01.420456 kubelet[1496]: W0209 19:28:01.420437 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.128.0.112" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:28:01.420585 kubelet[1496]: E0209 19:28:01.420571 1496 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.112" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:28:01.420971 kubelet[1496]: I0209 19:28:01.419279 1496 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:28:01.423043 kubelet[1496]: I0209 19:28:01.423014 1496 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 19:28:01.423423 kubelet[1496]: I0209 19:28:01.423399 1496 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:28:01.426895 kubelet[1496]: E0209 19:28:01.425993 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b248763d22eaf0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 395878640, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 395878640, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:28:01.427144 kubelet[1496]: E0209 19:28:01.427036 1496 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.128.0.112" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 19:28:01.427144 kubelet[1496]: W0209 19:28:01.427116 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:28:01.427144 kubelet[1496]: E0209 19:28:01.427138 1496 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:28:01.464709 kubelet[1496]: E0209 19:28:01.464570 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b2487641235961", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.112 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463015777, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463015777, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:28:01.465595 kubelet[1496]: I0209 19:28:01.465557 1496 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:28:01.465595 kubelet[1496]: I0209 19:28:01.465584 1496 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:28:01.465810 kubelet[1496]: I0209 19:28:01.465609 1496 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:28:01.466648 kubelet[1496]: E0209 19:28:01.466519 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b2487641238170", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.112 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463026032, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463026032, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:28:01.468057 kubelet[1496]: E0209 19:28:01.467960 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b24876412396a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.112 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463031463, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463031463, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:28:01.468905 kubelet[1496]: I0209 19:28:01.468883 1496 policy_none.go:49] "None policy: Start"
Feb 9 19:28:01.471585 kubelet[1496]: I0209 19:28:01.471566 1496 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:28:01.471759 kubelet[1496]: I0209 19:28:01.471745 1496 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:28:01.480976 systemd[1]: Created slice kubepods.slice.
Feb 9 19:28:01.487978 systemd[1]: Created slice kubepods-burstable.slice.
Feb 9 19:28:01.492358 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 9 19:28:01.498127 kubelet[1496]: I0209 19:28:01.498094 1496 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 19:28:01.498649 kubelet[1496]: I0209 19:28:01.498626 1496 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 19:28:01.500951 kubelet[1496]: E0209 19:28:01.500308 1496 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.112\" not found"
Feb 9 19:28:01.503608 kubelet[1496]: E0209 19:28:01.503198 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b24876436a7559", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 501230425, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 501230425, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:28:01.524797 kubelet[1496]: I0209 19:28:01.524736 1496 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.112"
Feb 9 19:28:01.526630 kubelet[1496]: E0209 19:28:01.526525 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b2487641235961", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.112 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463015777, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 524694027, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b2487641235961" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:28:01.526973 kubelet[1496]: E0209 19:28:01.526944 1496 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.112"
Feb 9 19:28:01.528089 kubelet[1496]: E0209 19:28:01.528000 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b2487641238170", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.112 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463026032, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 524700246, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b2487641238170" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:28:01.529518 kubelet[1496]: E0209 19:28:01.529416 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b24876412396a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.112 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463031463, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 524703336, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b24876412396a7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:28:01.629133 kubelet[1496]: E0209 19:28:01.628899 1496 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.128.0.112" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 19:28:01.639399 kubelet[1496]: I0209 19:28:01.639352 1496 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 19:28:01.675733 kubelet[1496]: I0209 19:28:01.675688 1496 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 19:28:01.675733 kubelet[1496]: I0209 19:28:01.675722 1496 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 19:28:01.675733 kubelet[1496]: I0209 19:28:01.675750 1496 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 19:28:01.676178 kubelet[1496]: E0209 19:28:01.675923 1496 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 9 19:28:01.677922 kubelet[1496]: W0209 19:28:01.677890 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:28:01.678132 kubelet[1496]: E0209 19:28:01.678112 1496 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:28:01.727993 kubelet[1496]: I0209 19:28:01.727942 1496 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.112"
Feb 9 19:28:01.729657 kubelet[1496]: E0209 19:28:01.729550 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b2487641235961", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.112 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463015777, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 727876547, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b2487641235961" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:28:01.730080 kubelet[1496]: E0209 19:28:01.729978 1496 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.112"
Feb 9 19:28:01.730896 kubelet[1496]: E0209 19:28:01.730796 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b2487641238170", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.112 status is now:
NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463026032, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 727894984, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b2487641238170" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:28:01.796072 kubelet[1496]: E0209 19:28:01.795948 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b24876412396a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.112 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463031463, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 727908700, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b24876412396a7" 
is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:28:02.030859 kubelet[1496]: E0209 19:28:02.030802 1496 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.128.0.112" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:28:02.131664 kubelet[1496]: I0209 19:28:02.131608 1496 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.112" Feb 9 19:28:02.133821 kubelet[1496]: E0209 19:28:02.133777 1496 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.112" Feb 9 19:28:02.133985 kubelet[1496]: E0209 19:28:02.133690 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b2487641235961", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.112 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463015777, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 2, 131545963, time.Local), Count:4, 
Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b2487641235961" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:28:02.196114 kubelet[1496]: E0209 19:28:02.195979 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b2487641238170", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.112 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463026032, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 2, 131561699, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b2487641238170" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:28:02.236572 kubelet[1496]: W0209 19:28:02.236526 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.128.0.112" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:28:02.236572 kubelet[1496]: E0209 19:28:02.236573 1496 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.112" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:28:02.378031 kubelet[1496]: W0209 19:28:02.377882 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:28:02.378031 kubelet[1496]: E0209 19:28:02.377929 1496 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:28:02.389323 kubelet[1496]: E0209 19:28:02.389247 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:02.396378 kubelet[1496]: E0209 19:28:02.396254 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b24876412396a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.112 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463031463, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 2, 131566997, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b24876412396a7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:28:02.553834 kubelet[1496]: W0209 19:28:02.553782 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:28:02.553834 kubelet[1496]: E0209 19:28:02.553828 1496 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:28:02.770050 kubelet[1496]: W0209 19:28:02.769968 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:28:02.770050 kubelet[1496]: E0209 19:28:02.770019 1496 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:28:02.832952 kubelet[1496]: E0209 19:28:02.832897 1496 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.128.0.112" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:28:02.934866 kubelet[1496]: I0209 19:28:02.934816 1496 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.112" Feb 9 19:28:02.936350 kubelet[1496]: E0209 19:28:02.936318 1496 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.112" Feb 9 19:28:02.936499 kubelet[1496]: E0209 19:28:02.936252 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b2487641235961", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.112 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463015777, time.Local), LastTimestamp:time.Date(2024, 
time.February, 9, 19, 28, 2, 934743684, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b2487641235961" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:28:02.937783 kubelet[1496]: E0209 19:28:02.937658 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b2487641238170", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.112 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463026032, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 2, 934758175, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b2487641238170" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:28:02.996028 kubelet[1496]: E0209 19:28:02.995898 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b24876412396a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.112 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463031463, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 2, 934777288, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b24876412396a7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:28:03.389589 kubelet[1496]: E0209 19:28:03.389519 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:04.380134 kubelet[1496]: W0209 19:28:04.380049 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:28:04.380134 kubelet[1496]: E0209 19:28:04.380105 1496 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:28:04.390458 kubelet[1496]: E0209 19:28:04.390389 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:04.435645 kubelet[1496]: E0209 19:28:04.435571 1496 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.128.0.112" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:28:04.537798 kubelet[1496]: I0209 19:28:04.537736 1496 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.112" Feb 9 19:28:04.538998 kubelet[1496]: E0209 19:28:04.538946 1496 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.112" Feb 9 19:28:04.539823 kubelet[1496]: E0209 19:28:04.539685 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b2487641235961", GenerateName:"", 
Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.112 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463015777, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 4, 537675714, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b2487641235961" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:28:04.541067 kubelet[1496]: E0209 19:28:04.540973 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b2487641238170", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.112 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463026032, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 4, 537692172, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b2487641238170" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:28:04.542166 kubelet[1496]: E0209 19:28:04.542078 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b24876412396a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.112 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463031463, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 4, 537696921, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b24876412396a7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:28:04.843593 kubelet[1496]: W0209 19:28:04.843545 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.128.0.112" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:28:04.843792 kubelet[1496]: E0209 19:28:04.843615 1496 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.112" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:28:05.082568 kubelet[1496]: W0209 19:28:05.082523 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:28:05.082568 kubelet[1496]: E0209 19:28:05.082568 1496 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:28:05.285980 kubelet[1496]: W0209 19:28:05.285931 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:28:05.285980 kubelet[1496]: E0209 19:28:05.285981 1496 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:28:05.390579 kubelet[1496]: E0209 19:28:05.390506 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 19:28:06.391548 kubelet[1496]: E0209 19:28:06.391478 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:07.392485 kubelet[1496]: E0209 19:28:07.392409 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:07.638230 kubelet[1496]: E0209 19:28:07.638166 1496 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.128.0.112" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:28:07.740655 kubelet[1496]: I0209 19:28:07.740605 1496 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.112" Feb 9 19:28:07.742179 kubelet[1496]: E0209 19:28:07.742133 1496 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.112" Feb 9 19:28:07.742314 kubelet[1496]: E0209 19:28:07.742162 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b2487641235961", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.112 status is now: 
NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463015777, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 7, 740537835, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b2487641235961" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:28:07.743413 kubelet[1496]: E0209 19:28:07.743301 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b2487641238170", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.112 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463026032, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 7, 740553368, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events 
"10.128.0.112.17b2487641238170" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:28:07.744531 kubelet[1496]: E0209 19:28:07.744454 1496 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.112.17b24876412396a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.112", UID:"10.128.0.112", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.112 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.112"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 28, 1, 463031463, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 28, 7, 740564323, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.112.17b24876412396a7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:28:07.984555 kubelet[1496]: W0209 19:28:07.984504 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:28:07.984555 kubelet[1496]: E0209 19:28:07.984555 1496 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:28:08.393212 kubelet[1496]: E0209 19:28:08.393136 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:08.495370 kubelet[1496]: W0209 19:28:08.495321 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:28:08.495370 kubelet[1496]: E0209 19:28:08.495371 1496 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:28:09.394320 kubelet[1496]: E0209 19:28:09.394231 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:09.844211 kubelet[1496]: W0209 19:28:09.844155 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:28:09.844211 kubelet[1496]: E0209 19:28:09.844211 1496 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:28:10.395350 kubelet[1496]: E0209 19:28:10.395273 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:11.070901 kubelet[1496]: W0209 19:28:11.070856 1496 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.128.0.112" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:28:11.070901 kubelet[1496]: E0209 19:28:11.070902 1496 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.112" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:28:11.376327 kubelet[1496]: I0209 19:28:11.376163 1496 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 19:28:11.396227 kubelet[1496]: E0209 19:28:11.396148 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:11.500815 kubelet[1496]: E0209 19:28:11.500751 1496 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.112\" not found" Feb 9 19:28:11.782621 kubelet[1496]: E0209 19:28:11.782576 1496 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.128.0.112" not found Feb 9 19:28:12.397105 kubelet[1496]: E0209 19:28:12.397027 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 19:28:12.824339 kubelet[1496]: E0209 19:28:12.824279 1496 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.128.0.112" not found Feb 9 19:28:13.397723 kubelet[1496]: E0209 19:28:13.397647 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:14.044725 kubelet[1496]: E0209 19:28:14.044665 1496 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.112\" not found" node="10.128.0.112" Feb 9 19:28:14.143682 kubelet[1496]: I0209 19:28:14.143636 1496 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.112" Feb 9 19:28:14.225729 kubelet[1496]: I0209 19:28:14.225663 1496 kubelet_node_status.go:73] "Successfully registered node" node="10.128.0.112" Feb 9 19:28:14.239941 kubelet[1496]: E0209 19:28:14.239885 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:14.340984 kubelet[1496]: E0209 19:28:14.340837 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:14.364344 sudo[1315]: pam_unix(sudo:session): session closed for user root Feb 9 19:28:14.398549 kubelet[1496]: E0209 19:28:14.398473 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:14.409108 sshd[1312]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:14.413253 systemd[1]: sshd@4-10.128.0.112:22-147.75.109.163:56856.service: Deactivated successfully. Feb 9 19:28:14.414472 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:28:14.415395 systemd-logind[1131]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:28:14.416626 systemd-logind[1131]: Removed session 5. 
Feb 9 19:28:14.441826 kubelet[1496]: E0209 19:28:14.441762 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:14.542889 kubelet[1496]: E0209 19:28:14.542834 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:14.644181 kubelet[1496]: E0209 19:28:14.644028 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:14.744647 kubelet[1496]: E0209 19:28:14.744573 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:14.845792 kubelet[1496]: E0209 19:28:14.845703 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:14.946940 kubelet[1496]: E0209 19:28:14.946872 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:15.047917 kubelet[1496]: E0209 19:28:15.047849 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:15.148937 kubelet[1496]: E0209 19:28:15.148862 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:15.250005 kubelet[1496]: E0209 19:28:15.249847 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:15.350937 kubelet[1496]: E0209 19:28:15.350869 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:15.398677 kubelet[1496]: E0209 19:28:15.398606 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:15.451757 kubelet[1496]: E0209 19:28:15.451700 1496 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:15.552079 kubelet[1496]: E0209 19:28:15.551927 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:15.652900 kubelet[1496]: E0209 19:28:15.652799 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:15.753524 kubelet[1496]: E0209 19:28:15.753455 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:15.854581 kubelet[1496]: E0209 19:28:15.854419 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:15.955140 kubelet[1496]: E0209 19:28:15.955038 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:16.055909 kubelet[1496]: E0209 19:28:16.055847 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:16.156689 kubelet[1496]: E0209 19:28:16.156541 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:16.257154 kubelet[1496]: E0209 19:28:16.257083 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:16.357730 kubelet[1496]: E0209 19:28:16.357664 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:16.399416 kubelet[1496]: E0209 19:28:16.399342 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:16.458191 kubelet[1496]: E0209 19:28:16.458125 1496 kubelet_node_status.go:458] "Error getting the current node from 
lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:16.559067 kubelet[1496]: E0209 19:28:16.558999 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:16.659739 kubelet[1496]: E0209 19:28:16.659675 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:16.760187 kubelet[1496]: E0209 19:28:16.760045 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:16.860912 kubelet[1496]: E0209 19:28:16.860845 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:16.897377 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 9 19:28:16.961872 kubelet[1496]: E0209 19:28:16.961793 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:17.062840 kubelet[1496]: E0209 19:28:17.062448 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:17.163362 kubelet[1496]: E0209 19:28:17.163294 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:17.264274 kubelet[1496]: E0209 19:28:17.264209 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:17.365111 kubelet[1496]: E0209 19:28:17.364962 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:17.399553 kubelet[1496]: E0209 19:28:17.399484 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:17.465440 kubelet[1496]: E0209 19:28:17.465368 1496 kubelet_node_status.go:458] "Error getting the 
current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:17.566582 kubelet[1496]: E0209 19:28:17.566514 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:17.667485 kubelet[1496]: E0209 19:28:17.667339 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:17.768485 kubelet[1496]: E0209 19:28:17.768421 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:17.869292 kubelet[1496]: E0209 19:28:17.869227 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:17.970339 kubelet[1496]: E0209 19:28:17.970271 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:18.071232 kubelet[1496]: E0209 19:28:18.071168 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:18.172184 kubelet[1496]: E0209 19:28:18.172127 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:18.273131 kubelet[1496]: E0209 19:28:18.272980 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:18.373797 kubelet[1496]: E0209 19:28:18.373721 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:18.400449 kubelet[1496]: E0209 19:28:18.400376 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:18.474111 kubelet[1496]: E0209 19:28:18.474050 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 
19:28:18.575230 kubelet[1496]: E0209 19:28:18.575085 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:18.675864 kubelet[1496]: E0209 19:28:18.675797 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:18.776555 kubelet[1496]: E0209 19:28:18.776480 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:18.877478 kubelet[1496]: E0209 19:28:18.877332 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:18.978090 kubelet[1496]: E0209 19:28:18.978017 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:19.078573 kubelet[1496]: E0209 19:28:19.078511 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:19.179611 kubelet[1496]: E0209 19:28:19.179546 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:19.280585 kubelet[1496]: E0209 19:28:19.280516 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:19.380782 kubelet[1496]: E0209 19:28:19.380711 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:19.401286 kubelet[1496]: E0209 19:28:19.401208 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:19.481312 kubelet[1496]: E0209 19:28:19.481172 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:19.582380 kubelet[1496]: E0209 19:28:19.582315 1496 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:19.682923 kubelet[1496]: E0209 19:28:19.682860 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:19.783511 kubelet[1496]: E0209 19:28:19.783364 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:19.883894 kubelet[1496]: E0209 19:28:19.883825 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:19.984853 kubelet[1496]: E0209 19:28:19.984784 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:20.085950 kubelet[1496]: E0209 19:28:20.085784 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:20.186897 kubelet[1496]: E0209 19:28:20.186831 1496 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.112\" not found" Feb 9 19:28:20.287886 kubelet[1496]: I0209 19:28:20.287851 1496 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 19:28:20.288346 env[1145]: time="2024-02-09T19:28:20.288261470Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
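At 19:28:20 the kubelet finally receives its pod CIDR (`newPodCIDR="192.168.1.0/24"`) and pushes it to the container runtime through CRI. A minimal sketch, using Python's standard `ipaddress` module, of checking whether an address falls inside that CIDR (the pod IP tested is illustrative, not taken from the log):

```python
import ipaddress

# The kubelet reported newPodCIDR="192.168.1.0/24"; a quick containment
# check for a candidate pod IP versus the node IP seen in the log.
pod_cidr = ipaddress.ip_network("192.168.1.0/24")
print(pod_cidr.num_addresses)                            # 256
print(ipaddress.ip_address("192.168.1.17") in pod_cidr)  # True (illustrative pod IP)
print(ipaddress.ip_address("10.128.0.112") in pod_cidr)  # False (the node IP)
```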
Feb 9 19:28:20.288899 kubelet[1496]: I0209 19:28:20.288587 1496 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 19:28:20.401022 kubelet[1496]: I0209 19:28:20.400853 1496 apiserver.go:52] "Watching apiserver" Feb 9 19:28:20.401512 kubelet[1496]: E0209 19:28:20.401453 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:20.406100 kubelet[1496]: I0209 19:28:20.406065 1496 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:28:20.406315 kubelet[1496]: I0209 19:28:20.406292 1496 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:28:20.415178 systemd[1]: Created slice kubepods-besteffort-pod2a8cec04_fe99_411d_b3a9_f5f0adedd0f3.slice. Feb 9 19:28:20.424066 kubelet[1496]: I0209 19:28:20.424031 1496 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:28:20.432496 systemd[1]: Created slice kubepods-burstable-pod00b6b3b6_5c9d_4f53_9213_d3c72edfc1c6.slice. 
Feb 9 19:28:20.451828 kubelet[1496]: I0209 19:28:20.451785 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cilium-cgroup\") pod \"cilium-mzhs4\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " pod="kube-system/cilium-mzhs4" Feb 9 19:28:20.452096 kubelet[1496]: I0209 19:28:20.452066 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-etc-cni-netd\") pod \"cilium-mzhs4\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " pod="kube-system/cilium-mzhs4" Feb 9 19:28:20.452238 kubelet[1496]: I0209 19:28:20.452121 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-lib-modules\") pod \"cilium-mzhs4\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " pod="kube-system/cilium-mzhs4" Feb 9 19:28:20.452238 kubelet[1496]: I0209 19:28:20.452162 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cilium-config-path\") pod \"cilium-mzhs4\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " pod="kube-system/cilium-mzhs4" Feb 9 19:28:20.452238 kubelet[1496]: I0209 19:28:20.452196 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a8cec04-fe99-411d-b3a9-f5f0adedd0f3-xtables-lock\") pod \"kube-proxy-djtvt\" (UID: \"2a8cec04-fe99-411d-b3a9-f5f0adedd0f3\") " pod="kube-system/kube-proxy-djtvt" Feb 9 19:28:20.452238 kubelet[1496]: I0209 19:28:20.452230 1496 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a8cec04-fe99-411d-b3a9-f5f0adedd0f3-lib-modules\") pod \"kube-proxy-djtvt\" (UID: \"2a8cec04-fe99-411d-b3a9-f5f0adedd0f3\") " pod="kube-system/kube-proxy-djtvt" Feb 9 19:28:20.452460 kubelet[1496]: I0209 19:28:20.452267 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-host-proc-sys-net\") pod \"cilium-mzhs4\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " pod="kube-system/cilium-mzhs4" Feb 9 19:28:20.452460 kubelet[1496]: I0209 19:28:20.452306 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-host-proc-sys-kernel\") pod \"cilium-mzhs4\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " pod="kube-system/cilium-mzhs4" Feb 9 19:28:20.452460 kubelet[1496]: I0209 19:28:20.452342 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2a8cec04-fe99-411d-b3a9-f5f0adedd0f3-kube-proxy\") pod \"kube-proxy-djtvt\" (UID: \"2a8cec04-fe99-411d-b3a9-f5f0adedd0f3\") " pod="kube-system/kube-proxy-djtvt" Feb 9 19:28:20.452460 kubelet[1496]: I0209 19:28:20.452383 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cilium-run\") pod \"cilium-mzhs4\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " pod="kube-system/cilium-mzhs4" Feb 9 19:28:20.452460 kubelet[1496]: I0209 19:28:20.452420 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cni-path\") pod \"cilium-mzhs4\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " pod="kube-system/cilium-mzhs4" Feb 9 19:28:20.452460 kubelet[1496]: I0209 19:28:20.452460 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-xtables-lock\") pod \"cilium-mzhs4\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " pod="kube-system/cilium-mzhs4" Feb 9 19:28:20.452752 kubelet[1496]: I0209 19:28:20.452495 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-clustermesh-secrets\") pod \"cilium-mzhs4\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " pod="kube-system/cilium-mzhs4" Feb 9 19:28:20.452752 kubelet[1496]: I0209 19:28:20.452532 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r27t8\" (UniqueName: \"kubernetes.io/projected/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-kube-api-access-r27t8\") pod \"cilium-mzhs4\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " pod="kube-system/cilium-mzhs4" Feb 9 19:28:20.452752 kubelet[1496]: I0209 19:28:20.452578 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-bpf-maps\") pod \"cilium-mzhs4\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " pod="kube-system/cilium-mzhs4" Feb 9 19:28:20.452752 kubelet[1496]: I0209 19:28:20.452629 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-hostproc\") pod \"cilium-mzhs4\" (UID: 
\"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " pod="kube-system/cilium-mzhs4" Feb 9 19:28:20.452752 kubelet[1496]: I0209 19:28:20.452667 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-hubble-tls\") pod \"cilium-mzhs4\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " pod="kube-system/cilium-mzhs4" Feb 9 19:28:20.452752 kubelet[1496]: I0209 19:28:20.452713 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnns7\" (UniqueName: \"kubernetes.io/projected/2a8cec04-fe99-411d-b3a9-f5f0adedd0f3-kube-api-access-vnns7\") pod \"kube-proxy-djtvt\" (UID: \"2a8cec04-fe99-411d-b3a9-f5f0adedd0f3\") " pod="kube-system/kube-proxy-djtvt" Feb 9 19:28:20.453177 kubelet[1496]: I0209 19:28:20.452747 1496 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:28:20.729112 env[1145]: time="2024-02-09T19:28:20.729032798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-djtvt,Uid:2a8cec04-fe99-411d-b3a9-f5f0adedd0f3,Namespace:kube-system,Attempt:0,}" Feb 9 19:28:21.042671 env[1145]: time="2024-02-09T19:28:21.042500778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mzhs4,Uid:00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6,Namespace:kube-system,Attempt:0,}" Feb 9 19:28:21.266972 env[1145]: time="2024-02-09T19:28:21.266913498Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:21.269418 env[1145]: time="2024-02-09T19:28:21.269343588Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:21.273614 env[1145]: time="2024-02-09T19:28:21.273522275Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:21.274753 env[1145]: time="2024-02-09T19:28:21.274697227Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:21.275825 env[1145]: time="2024-02-09T19:28:21.275762007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:21.278054 env[1145]: time="2024-02-09T19:28:21.277998480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:21.279011 env[1145]: time="2024-02-09T19:28:21.278971573Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:21.282881 env[1145]: time="2024-02-09T19:28:21.282825903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:21.310882 env[1145]: time="2024-02-09T19:28:21.306471021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:28:21.310882 env[1145]: time="2024-02-09T19:28:21.306524223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:28:21.310882 env[1145]: time="2024-02-09T19:28:21.306543783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:28:21.310882 env[1145]: time="2024-02-09T19:28:21.306781836Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/820c171ea373bfbec709bfc799bfab405d635e77fc3cdfa3c9c8487f6c7c308f pid=1589 runtime=io.containerd.runc.v2 Feb 9 19:28:21.312120 env[1145]: time="2024-02-09T19:28:21.308579976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:28:21.312120 env[1145]: time="2024-02-09T19:28:21.308624209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:28:21.312120 env[1145]: time="2024-02-09T19:28:21.308643462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:28:21.312120 env[1145]: time="2024-02-09T19:28:21.308892910Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366 pid=1601 runtime=io.containerd.runc.v2 Feb 9 19:28:21.342859 systemd[1]: Started cri-containerd-bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366.scope. Feb 9 19:28:21.352433 systemd[1]: Started cri-containerd-820c171ea373bfbec709bfc799bfab405d635e77fc3cdfa3c9c8487f6c7c308f.scope. 
Feb 9 19:28:21.387968 kubelet[1496]: E0209 19:28:21.387908 1496 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:21.399973 env[1145]: time="2024-02-09T19:28:21.399887290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mzhs4,Uid:00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366\"" Feb 9 19:28:21.402246 kubelet[1496]: E0209 19:28:21.402159 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:21.412604 kubelet[1496]: E0209 19:28:21.412508 1496 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url Feb 9 19:28:21.416938 env[1145]: time="2024-02-09T19:28:21.416885211Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:28:21.429845 env[1145]: time="2024-02-09T19:28:21.429751129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-djtvt,Uid:2a8cec04-fe99-411d-b3a9-f5f0adedd0f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"820c171ea373bfbec709bfc799bfab405d635e77fc3cdfa3c9c8487f6c7c308f\"" Feb 9 19:28:21.569932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount263562040.mount: Deactivated successfully. 
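The `RunPodSandbox` calls above return 64-character hex sandbox ids that reappear later in the scope, shim, and mount lines. A hedged sketch of extracting such an id from a journald-escaped "returns sandbox id" message (the message below is abbreviated from the log; the `\"` escaping matches how journald renders the embedded quotes):

```python
import re

# Hedged sketch: pull the 64-hex containerd sandbox id out of a
# journald-escaped "returns sandbox id" message.
SANDBOX = re.compile(r'returns sandbox id \\"([0-9a-f]{64})\\"')

msg = (r'RunPodSandbox for cilium-mzhs4 returns sandbox id '
       r'\"bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366\"')
print(SANDBOX.search(msg).group(1)[:12])  # bf9f7522d02a
```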
Feb 9 19:28:22.403250 kubelet[1496]: E0209 19:28:22.403190 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:23.403925 kubelet[1496]: E0209 19:28:23.403847 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:24.404755 kubelet[1496]: E0209 19:28:24.404702 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:25.405881 kubelet[1496]: E0209 19:28:25.405795 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:26.406791 kubelet[1496]: E0209 19:28:26.406653 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:26.944653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount151807263.mount: Deactivated successfully. 
Feb 9 19:28:27.407480 kubelet[1496]: E0209 19:28:27.407056 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:28.407580 kubelet[1496]: E0209 19:28:28.407517 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:29.408569 kubelet[1496]: E0209 19:28:29.408513 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:30.224045 env[1145]: time="2024-02-09T19:28:30.223972163Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:30.226903 env[1145]: time="2024-02-09T19:28:30.226850455Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:30.230414 env[1145]: time="2024-02-09T19:28:30.230350924Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:30.230843 env[1145]: time="2024-02-09T19:28:30.230801086Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 9 19:28:30.233332 env[1145]: time="2024-02-09T19:28:30.233294747Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 9 19:28:30.236936 env[1145]: time="2024-02-09T19:28:30.236522870Z" level=info msg="CreateContainer within sandbox \"bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 19:28:30.256310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831155829.mount: Deactivated successfully.
Feb 9 19:28:30.267371 env[1145]: time="2024-02-09T19:28:30.267317131Z" level=info msg="CreateContainer within sandbox \"bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740\""
Feb 9 19:28:30.268570 env[1145]: time="2024-02-09T19:28:30.268536338Z" level=info msg="StartContainer for \"750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740\""
Feb 9 19:28:30.307443 systemd[1]: Started cri-containerd-750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740.scope.
Feb 9 19:28:30.354809 env[1145]: time="2024-02-09T19:28:30.353911456Z" level=info msg="StartContainer for \"750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740\" returns successfully"
Feb 9 19:28:30.364222 systemd[1]: cri-containerd-750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740.scope: Deactivated successfully.
Feb 9 19:28:30.410170 kubelet[1496]: E0209 19:28:30.410107 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:31.252237 systemd[1]: run-containerd-runc-k8s.io-750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740-runc.R9COXf.mount: Deactivated successfully.
Feb 9 19:28:31.252802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740-rootfs.mount: Deactivated successfully.
Feb 9 19:28:31.411046 kubelet[1496]: E0209 19:28:31.410990 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:31.749455 update_engine[1133]: I0209 19:28:31.749361 1133 update_attempter.cc:509] Updating boot flags...
Feb 9 19:28:32.208254 env[1145]: time="2024-02-09T19:28:32.208155182Z" level=info msg="shim disconnected" id=750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740
Feb 9 19:28:32.208254 env[1145]: time="2024-02-09T19:28:32.208260567Z" level=warning msg="cleaning up after shim disconnected" id=750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740 namespace=k8s.io
Feb 9 19:28:32.209071 env[1145]: time="2024-02-09T19:28:32.208277364Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:32.226538 env[1145]: time="2024-02-09T19:28:32.226482710Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1738 runtime=io.containerd.runc.v2\n"
Feb 9 19:28:32.411228 kubelet[1496]: E0209 19:28:32.411130 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:32.745664 env[1145]: time="2024-02-09T19:28:32.745585613Z" level=info msg="CreateContainer within sandbox \"bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 19:28:32.771299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184633643.mount: Deactivated successfully.
Feb 9 19:28:32.793432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1277561632.mount: Deactivated successfully.
Feb 9 19:28:32.806046 env[1145]: time="2024-02-09T19:28:32.805977367Z" level=info msg="CreateContainer within sandbox \"bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19\""
Feb 9 19:28:32.807360 env[1145]: time="2024-02-09T19:28:32.807314902Z" level=info msg="StartContainer for \"b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19\""
Feb 9 19:28:32.847760 systemd[1]: Started cri-containerd-b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19.scope.
Feb 9 19:28:32.920840 env[1145]: time="2024-02-09T19:28:32.918643978Z" level=info msg="StartContainer for \"b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19\" returns successfully"
Feb 9 19:28:32.939875 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:28:32.940643 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:28:32.941978 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 19:28:32.947428 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:28:32.953128 systemd[1]: cri-containerd-b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19.scope: Deactivated successfully.
Feb 9 19:28:32.967018 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:28:33.052823 env[1145]: time="2024-02-09T19:28:33.051879078Z" level=info msg="shim disconnected" id=b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19
Feb 9 19:28:33.052823 env[1145]: time="2024-02-09T19:28:33.051970440Z" level=warning msg="cleaning up after shim disconnected" id=b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19 namespace=k8s.io
Feb 9 19:28:33.052823 env[1145]: time="2024-02-09T19:28:33.052002006Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:33.067092 env[1145]: time="2024-02-09T19:28:33.067033509Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1803 runtime=io.containerd.runc.v2\n"
Feb 9 19:28:33.412697 kubelet[1496]: E0209 19:28:33.412255 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:33.444131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677915077.mount: Deactivated successfully.
Feb 9 19:28:33.658277 env[1145]: time="2024-02-09T19:28:33.658205641Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:33.661090 env[1145]: time="2024-02-09T19:28:33.661023518Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:33.663695 env[1145]: time="2024-02-09T19:28:33.663564351Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:33.666085 env[1145]: time="2024-02-09T19:28:33.666023273Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:33.666648 env[1145]: time="2024-02-09T19:28:33.666598764Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb 9 19:28:33.669588 env[1145]: time="2024-02-09T19:28:33.669547067Z" level=info msg="CreateContainer within sandbox \"820c171ea373bfbec709bfc799bfab405d635e77fc3cdfa3c9c8487f6c7c308f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 9 19:28:33.691701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300479119.mount: Deactivated successfully.
Feb 9 19:28:33.694202 env[1145]: time="2024-02-09T19:28:33.694118257Z" level=info msg="CreateContainer within sandbox \"820c171ea373bfbec709bfc799bfab405d635e77fc3cdfa3c9c8487f6c7c308f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1c1190bcccac8295b9064f397fa317c0d154ccea5136b31402d3b693c9af8ba1\""
Feb 9 19:28:33.694824 env[1145]: time="2024-02-09T19:28:33.694721540Z" level=info msg="StartContainer for \"1c1190bcccac8295b9064f397fa317c0d154ccea5136b31402d3b693c9af8ba1\""
Feb 9 19:28:33.729995 systemd[1]: Started cri-containerd-1c1190bcccac8295b9064f397fa317c0d154ccea5136b31402d3b693c9af8ba1.scope.
Feb 9 19:28:33.754469 env[1145]: time="2024-02-09T19:28:33.754413593Z" level=info msg="CreateContainer within sandbox \"bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 19:28:33.792721 env[1145]: time="2024-02-09T19:28:33.792158042Z" level=info msg="CreateContainer within sandbox \"bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8\""
Feb 9 19:28:33.793290 env[1145]: time="2024-02-09T19:28:33.793248592Z" level=info msg="StartContainer for \"fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8\""
Feb 9 19:28:33.824577 env[1145]: time="2024-02-09T19:28:33.823622736Z" level=info msg="StartContainer for \"1c1190bcccac8295b9064f397fa317c0d154ccea5136b31402d3b693c9af8ba1\" returns successfully"
Feb 9 19:28:33.831886 systemd[1]: Started cri-containerd-fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8.scope.
Feb 9 19:28:33.898870 env[1145]: time="2024-02-09T19:28:33.898814605Z" level=info msg="StartContainer for \"fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8\" returns successfully"
Feb 9 19:28:33.901148 systemd[1]: cri-containerd-fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8.scope: Deactivated successfully.
Feb 9 19:28:34.050576 env[1145]: time="2024-02-09T19:28:34.050511147Z" level=info msg="shim disconnected" id=fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8
Feb 9 19:28:34.050576 env[1145]: time="2024-02-09T19:28:34.050573796Z" level=warning msg="cleaning up after shim disconnected" id=fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8 namespace=k8s.io
Feb 9 19:28:34.050576 env[1145]: time="2024-02-09T19:28:34.050587390Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:34.066969 env[1145]: time="2024-02-09T19:28:34.066907517Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1935 runtime=io.containerd.runc.v2\n"
Feb 9 19:28:34.413499 kubelet[1496]: E0209 19:28:34.413347 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:34.443832 systemd[1]: run-containerd-runc-k8s.io-1c1190bcccac8295b9064f397fa317c0d154ccea5136b31402d3b693c9af8ba1-runc.nnFx4a.mount: Deactivated successfully.
Feb 9 19:28:34.760096 env[1145]: time="2024-02-09T19:28:34.760035177Z" level=info msg="CreateContainer within sandbox \"bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 19:28:34.777157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount892940573.mount: Deactivated successfully.
Feb 9 19:28:34.787178 env[1145]: time="2024-02-09T19:28:34.787114703Z" level=info msg="CreateContainer within sandbox \"bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce\""
Feb 9 19:28:34.787963 env[1145]: time="2024-02-09T19:28:34.787905442Z" level=info msg="StartContainer for \"ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce\""
Feb 9 19:28:34.815803 systemd[1]: Started cri-containerd-ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce.scope.
Feb 9 19:28:34.851340 systemd[1]: cri-containerd-ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce.scope: Deactivated successfully.
Feb 9 19:28:34.857436 env[1145]: time="2024-02-09T19:28:34.857286982Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00b6b3b6_5c9d_4f53_9213_d3c72edfc1c6.slice/cri-containerd-ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce.scope/memory.events\": no such file or directory"
Feb 9 19:28:34.860087 env[1145]: time="2024-02-09T19:28:34.860017414Z" level=info msg="StartContainer for \"ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce\" returns successfully"
Feb 9 19:28:34.894424 env[1145]: time="2024-02-09T19:28:34.894355800Z" level=info msg="shim disconnected" id=ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce
Feb 9 19:28:34.894424 env[1145]: time="2024-02-09T19:28:34.894427151Z" level=warning msg="cleaning up after shim disconnected" id=ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce namespace=k8s.io
Feb 9 19:28:34.894831 env[1145]: time="2024-02-09T19:28:34.894441965Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:34.906726 env[1145]: time="2024-02-09T19:28:34.906654194Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2062 runtime=io.containerd.runc.v2\n"
Feb 9 19:28:35.414381 kubelet[1496]: E0209 19:28:35.414314 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:35.443708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce-rootfs.mount: Deactivated successfully.
Feb 9 19:28:35.768715 env[1145]: time="2024-02-09T19:28:35.768662937Z" level=info msg="CreateContainer within sandbox \"bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:28:35.788404 kubelet[1496]: I0209 19:28:35.788363 1496 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-djtvt" podStartSLOduration=-9.223372015066494e+09 pod.CreationTimestamp="2024-02-09 19:28:14 +0000 UTC" firstStartedPulling="2024-02-09 19:28:21.431870224 +0000 UTC m=+20.581962990" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:28:34.794607442 +0000 UTC m=+33.944700235" watchObservedRunningTime="2024-02-09 19:28:35.788281797 +0000 UTC m=+34.938374590"
Feb 9 19:28:35.792266 env[1145]: time="2024-02-09T19:28:35.792203725Z" level=info msg="CreateContainer within sandbox \"bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5\""
Feb 9 19:28:35.793124 env[1145]: time="2024-02-09T19:28:35.793075805Z" level=info msg="StartContainer for \"08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5\""
Feb 9 19:28:35.820983 systemd[1]: Started cri-containerd-08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5.scope.
Feb 9 19:28:35.877404 env[1145]: time="2024-02-09T19:28:35.877289298Z" level=info msg="StartContainer for \"08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5\" returns successfully"
Feb 9 19:28:36.005520 kubelet[1496]: I0209 19:28:36.004113 1496 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 19:28:36.397214 kernel: Initializing XFRM netlink socket
Feb 9 19:28:36.415517 kubelet[1496]: E0209 19:28:36.415465 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:37.416161 kubelet[1496]: E0209 19:28:37.416092 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:38.072150 systemd-networkd[1026]: cilium_host: Link UP
Feb 9 19:28:38.091683 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 9 19:28:38.091850 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 19:28:38.092218 systemd-networkd[1026]: cilium_net: Link UP
Feb 9 19:28:38.093704 systemd-networkd[1026]: cilium_net: Gained carrier
Feb 9 19:28:38.094002 systemd-networkd[1026]: cilium_host: Gained carrier
Feb 9 19:28:38.222258 systemd-networkd[1026]: cilium_vxlan: Link UP
Feb 9 19:28:38.222271 systemd-networkd[1026]: cilium_vxlan: Gained carrier
Feb 9 19:28:38.302958 systemd-networkd[1026]: cilium_net: Gained IPv6LL
Feb 9 19:28:38.417159 kubelet[1496]: E0209 19:28:38.416997 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:38.488799 kernel: NET: Registered PF_ALG protocol family
Feb 9 19:28:39.031052 systemd-networkd[1026]: cilium_host: Gained IPv6LL
Feb 9 19:28:39.332096 systemd-networkd[1026]: lxc_health: Link UP
Feb 9 19:28:39.336278 kubelet[1496]: I0209 19:28:39.336221 1496 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mzhs4" podStartSLOduration=-9.223372011518612e+09 pod.CreationTimestamp="2024-02-09 19:28:14 +0000 UTC" firstStartedPulling="2024-02-09 19:28:21.40848712 +0000 UTC m=+20.558579902" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:28:36.789154257 +0000 UTC m=+35.939247053" watchObservedRunningTime="2024-02-09 19:28:39.336163253 +0000 UTC m=+38.486256041"
Feb 9 19:28:39.340794 kubelet[1496]: I0209 19:28:39.336751 1496 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:28:39.346354 systemd[1]: Created slice kubepods-besteffort-pod195311ad_c10b_41a0_9055_f94b23072a57.slice.
Feb 9 19:28:39.367648 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:28:39.368619 systemd-networkd[1026]: lxc_health: Gained carrier
Feb 9 19:28:39.398903 kubelet[1496]: I0209 19:28:39.398848 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-299sk\" (UniqueName: \"kubernetes.io/projected/195311ad-c10b-41a0-9055-f94b23072a57-kube-api-access-299sk\") pod \"nginx-deployment-8ffc5cf85-kftdl\" (UID: \"195311ad-c10b-41a0-9055-f94b23072a57\") " pod="default/nginx-deployment-8ffc5cf85-kftdl"
Feb 9 19:28:39.418004 kubelet[1496]: E0209 19:28:39.417956 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:39.607419 systemd-networkd[1026]: cilium_vxlan: Gained IPv6LL
Feb 9 19:28:39.651735 env[1145]: time="2024-02-09T19:28:39.651665848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-kftdl,Uid:195311ad-c10b-41a0-9055-f94b23072a57,Namespace:default,Attempt:0,}"
Feb 9 19:28:39.731746 systemd-networkd[1026]: lxc4613b0fb70ca: Link UP
Feb 9 19:28:39.744861 kernel: eth0: renamed from tmp5e03e
Feb 9 19:28:39.755962 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4613b0fb70ca: link becomes ready
Feb 9 19:28:39.764327 systemd-networkd[1026]: lxc4613b0fb70ca: Gained carrier
Feb 9 19:28:40.418405 kubelet[1496]: E0209 19:28:40.418319 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:40.759578 systemd-networkd[1026]: lxc_health: Gained IPv6LL
Feb 9 19:28:41.388303 kubelet[1496]: E0209 19:28:41.388254 1496 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:41.420225 kubelet[1496]: E0209 19:28:41.420168 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:41.655508 systemd-networkd[1026]: lxc4613b0fb70ca: Gained IPv6LL
Feb 9 19:28:42.421283 kubelet[1496]: E0209 19:28:42.421228 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:43.422410 kubelet[1496]: E0209 19:28:43.422335 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:44.422601 kubelet[1496]: E0209 19:28:44.422540 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:44.664642 env[1145]: time="2024-02-09T19:28:44.664478917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:28:44.665292 env[1145]: time="2024-02-09T19:28:44.664935675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:28:44.665292 env[1145]: time="2024-02-09T19:28:44.664979843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:28:44.665622 env[1145]: time="2024-02-09T19:28:44.665552828Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e03e9502a52200769c4eab63525aa9a8443246c7b96d3e53d4970ddea04846d pid=2577 runtime=io.containerd.runc.v2
Feb 9 19:28:44.688665 systemd[1]: Started cri-containerd-5e03e9502a52200769c4eab63525aa9a8443246c7b96d3e53d4970ddea04846d.scope.
Feb 9 19:28:44.758164 env[1145]: time="2024-02-09T19:28:44.758110795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-kftdl,Uid:195311ad-c10b-41a0-9055-f94b23072a57,Namespace:default,Attempt:0,} returns sandbox id \"5e03e9502a52200769c4eab63525aa9a8443246c7b96d3e53d4970ddea04846d\""
Feb 9 19:28:44.760466 env[1145]: time="2024-02-09T19:28:44.760409030Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 19:28:45.423802 kubelet[1496]: E0209 19:28:45.423712 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:46.424691 kubelet[1496]: E0209 19:28:46.424634 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:47.425725 kubelet[1496]: E0209 19:28:47.425649 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:47.583931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901476673.mount: Deactivated successfully.
Feb 9 19:28:48.426854 kubelet[1496]: E0209 19:28:48.426759 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:48.654027 env[1145]: time="2024-02-09T19:28:48.653949501Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:48.657246 env[1145]: time="2024-02-09T19:28:48.657187737Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:48.660194 env[1145]: time="2024-02-09T19:28:48.660133067Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:48.662845 env[1145]: time="2024-02-09T19:28:48.662794100Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:48.664156 env[1145]: time="2024-02-09T19:28:48.664101405Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 9 19:28:48.666577 env[1145]: time="2024-02-09T19:28:48.666507715Z" level=info msg="CreateContainer within sandbox \"5e03e9502a52200769c4eab63525aa9a8443246c7b96d3e53d4970ddea04846d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 9 19:28:48.683126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2661831964.mount: Deactivated successfully.
Feb 9 19:28:48.693741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2734538881.mount: Deactivated successfully.
Feb 9 19:28:48.699019 env[1145]: time="2024-02-09T19:28:48.698933111Z" level=info msg="CreateContainer within sandbox \"5e03e9502a52200769c4eab63525aa9a8443246c7b96d3e53d4970ddea04846d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"364d905c4678435a2e9f555790e49e32a425c57d890096469787dfa10297d0f1\""
Feb 9 19:28:48.700119 env[1145]: time="2024-02-09T19:28:48.700061712Z" level=info msg="StartContainer for \"364d905c4678435a2e9f555790e49e32a425c57d890096469787dfa10297d0f1\""
Feb 9 19:28:48.725097 systemd[1]: Started cri-containerd-364d905c4678435a2e9f555790e49e32a425c57d890096469787dfa10297d0f1.scope.
Feb 9 19:28:48.772941 env[1145]: time="2024-02-09T19:28:48.772874860Z" level=info msg="StartContainer for \"364d905c4678435a2e9f555790e49e32a425c57d890096469787dfa10297d0f1\" returns successfully"
Feb 9 19:28:48.822154 kubelet[1496]: I0209 19:28:48.822096 1496 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-kftdl" podStartSLOduration=-9.223372027032722e+09 pod.CreationTimestamp="2024-02-09 19:28:39 +0000 UTC" firstStartedPulling="2024-02-09 19:28:44.760017306 +0000 UTC m=+43.910110090" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:28:48.821603562 +0000 UTC m=+47.971696361" watchObservedRunningTime="2024-02-09 19:28:48.822053282 +0000 UTC m=+47.972146070"
Feb 9 19:28:49.427272 kubelet[1496]: E0209 19:28:49.427202 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:50.428413 kubelet[1496]: E0209 19:28:50.428338 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:51.429417 kubelet[1496]: E0209 19:28:51.429343 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:52.430074 kubelet[1496]: E0209 19:28:52.430005 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:53.387698 kubelet[1496]: I0209 19:28:53.387659 1496 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:28:53.396609 systemd[1]: Created slice kubepods-besteffort-pod6fc49047_0745_467e_bb2d_527cdf2357c9.slice.
Feb 9 19:28:53.430573 kubelet[1496]: E0209 19:28:53.430472 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:53.492256 kubelet[1496]: I0209 19:28:53.492192 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6fc49047-0745-467e-bb2d-527cdf2357c9-data\") pod \"nfs-server-provisioner-0\" (UID: \"6fc49047-0745-467e-bb2d-527cdf2357c9\") " pod="default/nfs-server-provisioner-0"
Feb 9 19:28:53.492256 kubelet[1496]: I0209 19:28:53.492260 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd2hx\" (UniqueName: \"kubernetes.io/projected/6fc49047-0745-467e-bb2d-527cdf2357c9-kube-api-access-kd2hx\") pod \"nfs-server-provisioner-0\" (UID: \"6fc49047-0745-467e-bb2d-527cdf2357c9\") " pod="default/nfs-server-provisioner-0"
Feb 9 19:28:53.702528 env[1145]: time="2024-02-09T19:28:53.702456953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6fc49047-0745-467e-bb2d-527cdf2357c9,Namespace:default,Attempt:0,}"
Feb 9 19:28:53.754006 systemd-networkd[1026]: lxc85f20346090e: Link UP
Feb 9 19:28:53.766938 kernel: eth0: renamed from tmp0f9c0
Feb 9 19:28:53.784263 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:28:53.784431 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc85f20346090e: link becomes ready
Feb 9 19:28:53.784740 systemd-networkd[1026]: lxc85f20346090e: Gained carrier
Feb 9 19:28:54.070774 env[1145]: time="2024-02-09T19:28:54.070534607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:28:54.070774 env[1145]: time="2024-02-09T19:28:54.070616504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:28:54.071047 env[1145]: time="2024-02-09T19:28:54.070635080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:28:54.071612 env[1145]: time="2024-02-09T19:28:54.071541233Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f9c0322051dc81eaa2f36727a56f0a2c5e042f1a4bbca1d3a329f85af7a9446 pid=2745 runtime=io.containerd.runc.v2
Feb 9 19:28:54.105056 systemd[1]: Started cri-containerd-0f9c0322051dc81eaa2f36727a56f0a2c5e042f1a4bbca1d3a329f85af7a9446.scope.
Feb 9 19:28:54.165350 env[1145]: time="2024-02-09T19:28:54.165289665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6fc49047-0745-467e-bb2d-527cdf2357c9,Namespace:default,Attempt:0,} returns sandbox id \"0f9c0322051dc81eaa2f36727a56f0a2c5e042f1a4bbca1d3a329f85af7a9446\""
Feb 9 19:28:54.168168 env[1145]: time="2024-02-09T19:28:54.168108540Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 9 19:28:54.431110 kubelet[1496]: E0209 19:28:54.431038 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:54.611542 systemd[1]: run-containerd-runc-k8s.io-0f9c0322051dc81eaa2f36727a56f0a2c5e042f1a4bbca1d3a329f85af7a9446-runc.RPOEij.mount: Deactivated successfully.
Feb 9 19:28:55.159080 systemd-networkd[1026]: lxc85f20346090e: Gained IPv6LL
Feb 9 19:28:55.431779 kubelet[1496]: E0209 19:28:55.431712 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:56.432974 kubelet[1496]: E0209 19:28:56.432881 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:56.804810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount276368733.mount: Deactivated successfully.
Feb 9 19:28:57.433301 kubelet[1496]: E0209 19:28:57.433249 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:58.433447 kubelet[1496]: E0209 19:28:58.433362 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:59.196413 env[1145]: time="2024-02-09T19:28:59.196335117Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:59.199314 env[1145]: time="2024-02-09T19:28:59.199253546Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:59.201669 env[1145]: time="2024-02-09T19:28:59.201621558Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:59.204137 env[1145]: time="2024-02-09T19:28:59.204087212Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:59.205141 env[1145]: time="2024-02-09T19:28:59.205087491Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 9 19:28:59.208179 env[1145]: time="2024-02-09T19:28:59.208124344Z" level=info msg="CreateContainer within sandbox \"0f9c0322051dc81eaa2f36727a56f0a2c5e042f1a4bbca1d3a329f85af7a9446\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 9 19:28:59.221546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3757055684.mount: Deactivated successfully.
Feb 9 19:28:59.232165 env[1145]: time="2024-02-09T19:28:59.232100660Z" level=info msg="CreateContainer within sandbox \"0f9c0322051dc81eaa2f36727a56f0a2c5e042f1a4bbca1d3a329f85af7a9446\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ed736ab4149a35d311b04ccc61389ab3f65bfd05855abd89b80c4867d15b2beb\""
Feb 9 19:28:59.232992 env[1145]: time="2024-02-09T19:28:59.232936168Z" level=info msg="StartContainer for \"ed736ab4149a35d311b04ccc61389ab3f65bfd05855abd89b80c4867d15b2beb\""
Feb 9 19:28:59.261802 systemd[1]: Started cri-containerd-ed736ab4149a35d311b04ccc61389ab3f65bfd05855abd89b80c4867d15b2beb.scope.
Feb 9 19:28:59.311029 env[1145]: time="2024-02-09T19:28:59.310971239Z" level=info msg="StartContainer for \"ed736ab4149a35d311b04ccc61389ab3f65bfd05855abd89b80c4867d15b2beb\" returns successfully" Feb 9 19:28:59.434180 kubelet[1496]: E0209 19:28:59.433959 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:28:59.884830 kubelet[1496]: I0209 19:28:59.884749 1496 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372029970076e+09 pod.CreationTimestamp="2024-02-09 19:28:53 +0000 UTC" firstStartedPulling="2024-02-09 19:28:54.167338408 +0000 UTC m=+53.317431195" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:28:59.884396426 +0000 UTC m=+59.034489220" watchObservedRunningTime="2024-02-09 19:28:59.884699759 +0000 UTC m=+59.034792551" Feb 9 19:29:00.434173 kubelet[1496]: E0209 19:29:00.434127 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:01.388342 kubelet[1496]: E0209 19:29:01.388265 1496 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:01.434730 kubelet[1496]: E0209 19:29:01.434668 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:02.435728 kubelet[1496]: E0209 19:29:02.435667 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:03.436431 kubelet[1496]: E0209 19:29:03.436359 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:04.436811 kubelet[1496]: E0209 19:29:04.436663 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 19:29:05.437080 kubelet[1496]: E0209 19:29:05.437013 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:06.438213 kubelet[1496]: E0209 19:29:06.438144 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:07.439307 kubelet[1496]: E0209 19:29:07.439238 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:08.439989 kubelet[1496]: E0209 19:29:08.439911 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:09.397958 kubelet[1496]: I0209 19:29:09.397896 1496 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:29:09.405378 systemd[1]: Created slice kubepods-besteffort-pod57431c5c_0b8a_45a8_990e_8c2fa894a678.slice. Feb 9 19:29:09.440552 kubelet[1496]: E0209 19:29:09.440496 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:09.497535 kubelet[1496]: I0209 19:29:09.497479 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77be3950-c7aa-46b0-afce-ac6c07b88b9e\" (UniqueName: \"kubernetes.io/nfs/57431c5c-0b8a-45a8-990e-8c2fa894a678-pvc-77be3950-c7aa-46b0-afce-ac6c07b88b9e\") pod \"test-pod-1\" (UID: \"57431c5c-0b8a-45a8-990e-8c2fa894a678\") " pod="default/test-pod-1" Feb 9 19:29:09.497535 kubelet[1496]: I0209 19:29:09.497550 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26nt7\" (UniqueName: \"kubernetes.io/projected/57431c5c-0b8a-45a8-990e-8c2fa894a678-kube-api-access-26nt7\") pod \"test-pod-1\" (UID: \"57431c5c-0b8a-45a8-990e-8c2fa894a678\") " pod="default/test-pod-1" Feb 9 19:29:09.641824 kernel: FS-Cache: 
Loaded Feb 9 19:29:09.698675 kernel: RPC: Registered named UNIX socket transport module. Feb 9 19:29:09.698887 kernel: RPC: Registered udp transport module. Feb 9 19:29:09.698934 kernel: RPC: Registered tcp transport module. Feb 9 19:29:09.703503 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 9 19:29:09.768798 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 19:29:10.001161 kernel: NFS: Registering the id_resolver key type Feb 9 19:29:10.001344 kernel: Key type id_resolver registered Feb 9 19:29:10.001385 kernel: Key type id_legacy registered Feb 9 19:29:10.061671 nfsidmap[2895]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Feb 9 19:29:10.072134 nfsidmap[2896]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Feb 9 19:29:10.309649 env[1145]: time="2024-02-09T19:29:10.309579994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:57431c5c-0b8a-45a8-990e-8c2fa894a678,Namespace:default,Attempt:0,}" Feb 9 19:29:10.351387 systemd-networkd[1026]: lxca896985da0ee: Link UP Feb 9 19:29:10.361920 kernel: eth0: renamed from tmp550b8 Feb 9 19:29:10.383491 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:29:10.383639 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca896985da0ee: link becomes ready Feb 9 19:29:10.385431 systemd-networkd[1026]: lxca896985da0ee: Gained carrier Feb 9 19:29:10.441091 kubelet[1496]: E0209 19:29:10.440989 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:10.622960 env[1145]: time="2024-02-09T19:29:10.622744796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:29:10.623870 env[1145]: time="2024-02-09T19:29:10.622820470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:29:10.623870 env[1145]: time="2024-02-09T19:29:10.623155721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:29:10.623870 env[1145]: time="2024-02-09T19:29:10.623433022Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/550b8d6428d8090513b9fa9eccfc251f15d1b55a442ea44c1081b0917f1d73e5 pid=2924 runtime=io.containerd.runc.v2 Feb 9 19:29:10.658914 systemd[1]: Started cri-containerd-550b8d6428d8090513b9fa9eccfc251f15d1b55a442ea44c1081b0917f1d73e5.scope. Feb 9 19:29:10.723126 env[1145]: time="2024-02-09T19:29:10.723049667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:57431c5c-0b8a-45a8-990e-8c2fa894a678,Namespace:default,Attempt:0,} returns sandbox id \"550b8d6428d8090513b9fa9eccfc251f15d1b55a442ea44c1081b0917f1d73e5\"" Feb 9 19:29:10.725301 env[1145]: time="2024-02-09T19:29:10.725258682Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:29:10.965116 env[1145]: time="2024-02-09T19:29:10.965048357Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:10.967517 env[1145]: time="2024-02-09T19:29:10.967466427Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:10.970138 env[1145]: time="2024-02-09T19:29:10.970094746Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:10.972667 env[1145]: time="2024-02-09T19:29:10.972615229Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:10.973615 env[1145]: time="2024-02-09T19:29:10.973567916Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:29:10.976504 env[1145]: time="2024-02-09T19:29:10.976449867Z" level=info msg="CreateContainer within sandbox \"550b8d6428d8090513b9fa9eccfc251f15d1b55a442ea44c1081b0917f1d73e5\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 19:29:10.994941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount87357733.mount: Deactivated successfully. Feb 9 19:29:11.005644 env[1145]: time="2024-02-09T19:29:11.005572580Z" level=info msg="CreateContainer within sandbox \"550b8d6428d8090513b9fa9eccfc251f15d1b55a442ea44c1081b0917f1d73e5\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e37dec7fb81c8a74f9210848bdfed4a616292a9a44ef766e78134766ad6ce71a\"" Feb 9 19:29:11.006654 env[1145]: time="2024-02-09T19:29:11.006598239Z" level=info msg="StartContainer for \"e37dec7fb81c8a74f9210848bdfed4a616292a9a44ef766e78134766ad6ce71a\"" Feb 9 19:29:11.031378 systemd[1]: Started cri-containerd-e37dec7fb81c8a74f9210848bdfed4a616292a9a44ef766e78134766ad6ce71a.scope. 
Feb 9 19:29:11.073819 env[1145]: time="2024-02-09T19:29:11.073729163Z" level=info msg="StartContainer for \"e37dec7fb81c8a74f9210848bdfed4a616292a9a44ef766e78134766ad6ce71a\" returns successfully" Feb 9 19:29:11.442017 kubelet[1496]: E0209 19:29:11.441944 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:11.919495 kubelet[1496]: I0209 19:29:11.919268 1496 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372018935555e+09 pod.CreationTimestamp="2024-02-09 19:28:54 +0000 UTC" firstStartedPulling="2024-02-09 19:29:10.72460523 +0000 UTC m=+69.874698005" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:29:11.919076697 +0000 UTC m=+71.069169488" watchObservedRunningTime="2024-02-09 19:29:11.919221379 +0000 UTC m=+71.069314212" Feb 9 19:29:12.119291 systemd-networkd[1026]: lxca896985da0ee: Gained IPv6LL Feb 9 19:29:12.443194 kubelet[1496]: E0209 19:29:12.443115 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:13.443443 kubelet[1496]: E0209 19:29:13.443304 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:14.444502 kubelet[1496]: E0209 19:29:14.444440 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:15.444797 kubelet[1496]: E0209 19:29:15.444723 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:16.445512 kubelet[1496]: E0209 19:29:16.445441 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:17.446680 kubelet[1496]: E0209 19:29:17.446608 1496 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:18.447735 kubelet[1496]: E0209 19:29:18.447668 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:19.448719 kubelet[1496]: E0209 19:29:19.448641 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:19.769519 systemd[1]: run-containerd-runc-k8s.io-08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5-runc.p0nBEa.mount: Deactivated successfully. Feb 9 19:29:19.794220 env[1145]: time="2024-02-09T19:29:19.794129264Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:29:19.801268 env[1145]: time="2024-02-09T19:29:19.801222400Z" level=info msg="StopContainer for \"08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5\" with timeout 1 (s)" Feb 9 19:29:19.801666 env[1145]: time="2024-02-09T19:29:19.801620012Z" level=info msg="Stop container \"08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5\" with signal terminated" Feb 9 19:29:19.812843 systemd-networkd[1026]: lxc_health: Link DOWN Feb 9 19:29:19.812854 systemd-networkd[1026]: lxc_health: Lost carrier Feb 9 19:29:19.836515 systemd[1]: cri-containerd-08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5.scope: Deactivated successfully. Feb 9 19:29:19.836904 systemd[1]: cri-containerd-08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5.scope: Consumed 9.423s CPU time. Feb 9 19:29:19.865148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5-rootfs.mount: Deactivated successfully. 
Feb 9 19:29:20.448873 kubelet[1496]: E0209 19:29:20.448807 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:20.816314 env[1145]: time="2024-02-09T19:29:20.816119742Z" level=info msg="Kill container \"08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5\"" Feb 9 19:29:21.387815 kubelet[1496]: E0209 19:29:21.387731 1496 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:21.450480 kubelet[1496]: E0209 19:29:21.450434 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:21.475342 env[1145]: time="2024-02-09T19:29:21.475272066Z" level=info msg="shim disconnected" id=08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5 Feb 9 19:29:21.475342 env[1145]: time="2024-02-09T19:29:21.475341869Z" level=warning msg="cleaning up after shim disconnected" id=08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5 namespace=k8s.io Feb 9 19:29:21.475646 env[1145]: time="2024-02-09T19:29:21.475356390Z" level=info msg="cleaning up dead shim" Feb 9 19:29:21.488078 env[1145]: time="2024-02-09T19:29:21.487950669Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:29:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3052 runtime=io.containerd.runc.v2\n" Feb 9 19:29:21.491992 env[1145]: time="2024-02-09T19:29:21.491926705Z" level=info msg="StopContainer for \"08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5\" returns successfully" Feb 9 19:29:21.492844 env[1145]: time="2024-02-09T19:29:21.492801498Z" level=info msg="StopPodSandbox for \"bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366\"" Feb 9 19:29:21.492983 env[1145]: time="2024-02-09T19:29:21.492880603Z" level=info msg="Container to stop \"750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:29:21.492983 env[1145]: time="2024-02-09T19:29:21.492907775Z" level=info msg="Container to stop \"b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:29:21.492983 env[1145]: time="2024-02-09T19:29:21.492927384Z" level=info msg="Container to stop \"fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:29:21.492983 env[1145]: time="2024-02-09T19:29:21.492945705Z" level=info msg="Container to stop \"ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:29:21.492983 env[1145]: time="2024-02-09T19:29:21.492963528Z" level=info msg="Container to stop \"08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:29:21.495713 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366-shm.mount: Deactivated successfully. Feb 9 19:29:21.505461 systemd[1]: cri-containerd-bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366.scope: Deactivated successfully. Feb 9 19:29:21.517905 kubelet[1496]: E0209 19:29:21.517625 1496 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:29:21.533518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366-rootfs.mount: Deactivated successfully. 
Feb 9 19:29:21.537160 env[1145]: time="2024-02-09T19:29:21.537100545Z" level=info msg="shim disconnected" id=bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366 Feb 9 19:29:21.537969 env[1145]: time="2024-02-09T19:29:21.537818977Z" level=warning msg="cleaning up after shim disconnected" id=bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366 namespace=k8s.io Feb 9 19:29:21.537969 env[1145]: time="2024-02-09T19:29:21.537854555Z" level=info msg="cleaning up dead shim" Feb 9 19:29:21.550158 env[1145]: time="2024-02-09T19:29:21.550093452Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:29:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3082 runtime=io.containerd.runc.v2\n" Feb 9 19:29:21.550573 env[1145]: time="2024-02-09T19:29:21.550527599Z" level=info msg="TearDown network for sandbox \"bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366\" successfully" Feb 9 19:29:21.550708 env[1145]: time="2024-02-09T19:29:21.550570464Z" level=info msg="StopPodSandbox for \"bf9f7522d02adeb021d1c305b48fcb84ff5df3e30e05a7ddc107b2f827e6d366\" returns successfully" Feb 9 19:29:21.678814 kubelet[1496]: I0209 19:29:21.677582 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-lib-modules\") pod \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " Feb 9 19:29:21.678814 kubelet[1496]: I0209 19:29:21.677650 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cilium-config-path\") pod \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " Feb 9 19:29:21.678814 kubelet[1496]: I0209 19:29:21.677674 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" (UID: "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:21.678814 kubelet[1496]: I0209 19:29:21.677705 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" (UID: "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:21.678814 kubelet[1496]: I0209 19:29:21.677684 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-host-proc-sys-net\") pod \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " Feb 9 19:29:21.679296 kubelet[1496]: I0209 19:29:21.677746 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-xtables-lock\") pod \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " Feb 9 19:29:21.679296 kubelet[1496]: I0209 19:29:21.677806 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-bpf-maps\") pod \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " Feb 9 19:29:21.679296 kubelet[1496]: I0209 19:29:21.677835 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-hostproc\") pod \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " Feb 9 19:29:21.679296 kubelet[1496]: I0209 19:29:21.677864 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cilium-cgroup\") pod \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " Feb 9 19:29:21.679296 kubelet[1496]: I0209 19:29:21.677893 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-etc-cni-netd\") pod \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " Feb 9 19:29:21.679296 kubelet[1496]: I0209 19:29:21.677929 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-hubble-tls\") pod \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " Feb 9 19:29:21.679631 kubelet[1496]: I0209 19:29:21.677961 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cni-path\") pod \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " Feb 9 19:29:21.679631 kubelet[1496]: W0209 19:29:21.677943 1496 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:29:21.679631 kubelet[1496]: I0209 19:29:21.678001 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r27t8\" (UniqueName: 
\"kubernetes.io/projected/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-kube-api-access-r27t8\") pod \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " Feb 9 19:29:21.679631 kubelet[1496]: I0209 19:29:21.678037 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-host-proc-sys-kernel\") pod \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " Feb 9 19:29:21.679631 kubelet[1496]: I0209 19:29:21.678068 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cilium-run\") pod \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " Feb 9 19:29:21.679631 kubelet[1496]: I0209 19:29:21.678103 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-clustermesh-secrets\") pod \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\" (UID: \"00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6\") " Feb 9 19:29:21.679983 kubelet[1496]: I0209 19:29:21.678154 1496 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-host-proc-sys-net\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:21.679983 kubelet[1496]: I0209 19:29:21.678174 1496 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-lib-modules\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:21.695506 kubelet[1496]: I0209 19:29:21.682696 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" (UID: "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:29:21.695506 kubelet[1496]: I0209 19:29:21.682806 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" (UID: "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:21.695506 kubelet[1496]: I0209 19:29:21.682840 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" (UID: "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:21.695506 kubelet[1496]: I0209 19:29:21.682898 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-hostproc" (OuterVolumeSpecName: "hostproc") pod "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" (UID: "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:21.695506 kubelet[1496]: I0209 19:29:21.682922 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" (UID: "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:21.684866 systemd[1]: var-lib-kubelet-pods-00b6b3b6\x2d5c9d\x2d4f53\x2d9213\x2dd3c72edfc1c6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:29:21.696136 kubelet[1496]: I0209 19:29:21.682949 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" (UID: "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:21.696136 kubelet[1496]: I0209 19:29:21.687532 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" (UID: "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:29:21.696136 kubelet[1496]: I0209 19:29:21.687610 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cni-path" (OuterVolumeSpecName: "cni-path") pod "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" (UID: "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:21.696136 kubelet[1496]: I0209 19:29:21.691908 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-kube-api-access-r27t8" (OuterVolumeSpecName: "kube-api-access-r27t8") pod "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" (UID: "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6"). InnerVolumeSpecName "kube-api-access-r27t8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:29:21.696136 kubelet[1496]: I0209 19:29:21.691977 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" (UID: "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:21.692780 systemd[1]: var-lib-kubelet-pods-00b6b3b6\x2d5c9d\x2d4f53\x2d9213\x2dd3c72edfc1c6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:29:21.696524 kubelet[1496]: I0209 19:29:21.692011 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" (UID: "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:21.696524 kubelet[1496]: I0209 19:29:21.695423 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" (UID: "00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:29:21.698407 systemd[1]: var-lib-kubelet-pods-00b6b3b6\x2d5c9d\x2d4f53\x2d9213\x2dd3c72edfc1c6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr27t8.mount: Deactivated successfully. 
Feb 9 19:29:21.778626 kubelet[1496]: I0209 19:29:21.778584 1496 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cni-path\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:21.778626 kubelet[1496]: I0209 19:29:21.778629 1496 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-r27t8\" (UniqueName: \"kubernetes.io/projected/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-kube-api-access-r27t8\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:21.778944 kubelet[1496]: I0209 19:29:21.778649 1496 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-hubble-tls\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:21.778944 kubelet[1496]: I0209 19:29:21.778665 1496 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-clustermesh-secrets\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:21.778944 kubelet[1496]: I0209 19:29:21.778681 1496 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-host-proc-sys-kernel\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:21.778944 kubelet[1496]: I0209 19:29:21.778695 1496 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cilium-run\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:21.778944 kubelet[1496]: I0209 19:29:21.778710 1496 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-xtables-lock\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:21.778944 kubelet[1496]: I0209 19:29:21.778724 1496 
reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-bpf-maps\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:21.778944 kubelet[1496]: I0209 19:29:21.778739 1496 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-hostproc\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:21.778944 kubelet[1496]: I0209 19:29:21.778752 1496 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cilium-cgroup\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:21.779234 kubelet[1496]: I0209 19:29:21.778783 1496 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-etc-cni-netd\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:21.779234 kubelet[1496]: I0209 19:29:21.778811 1496 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6-cilium-config-path\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:21.934023 kubelet[1496]: I0209 19:29:21.931439 1496 scope.go:115] "RemoveContainer" containerID="08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5" Feb 9 19:29:21.939137 systemd[1]: Removed slice kubepods-burstable-pod00b6b3b6_5c9d_4f53_9213_d3c72edfc1c6.slice. Feb 9 19:29:21.939319 systemd[1]: kubepods-burstable-pod00b6b3b6_5c9d_4f53_9213_d3c72edfc1c6.slice: Consumed 9.582s CPU time. 
Feb 9 19:29:21.941623 env[1145]: time="2024-02-09T19:29:21.941570151Z" level=info msg="RemoveContainer for \"08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5\"" Feb 9 19:29:21.947956 env[1145]: time="2024-02-09T19:29:21.947903675Z" level=info msg="RemoveContainer for \"08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5\" returns successfully" Feb 9 19:29:21.948233 kubelet[1496]: I0209 19:29:21.948191 1496 scope.go:115] "RemoveContainer" containerID="ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce" Feb 9 19:29:21.949574 env[1145]: time="2024-02-09T19:29:21.949531960Z" level=info msg="RemoveContainer for \"ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce\"" Feb 9 19:29:21.953641 env[1145]: time="2024-02-09T19:29:21.953597424Z" level=info msg="RemoveContainer for \"ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce\" returns successfully" Feb 9 19:29:21.955351 kubelet[1496]: I0209 19:29:21.955323 1496 scope.go:115] "RemoveContainer" containerID="fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8" Feb 9 19:29:21.956797 env[1145]: time="2024-02-09T19:29:21.956738692Z" level=info msg="RemoveContainer for \"fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8\"" Feb 9 19:29:21.961466 env[1145]: time="2024-02-09T19:29:21.961417831Z" level=info msg="RemoveContainer for \"fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8\" returns successfully" Feb 9 19:29:21.961900 kubelet[1496]: I0209 19:29:21.961870 1496 scope.go:115] "RemoveContainer" containerID="b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19" Feb 9 19:29:21.963335 env[1145]: time="2024-02-09T19:29:21.963293361Z" level=info msg="RemoveContainer for \"b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19\"" Feb 9 19:29:21.967556 env[1145]: time="2024-02-09T19:29:21.967499168Z" level=info msg="RemoveContainer for 
\"b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19\" returns successfully" Feb 9 19:29:21.967807 kubelet[1496]: I0209 19:29:21.967755 1496 scope.go:115] "RemoveContainer" containerID="750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740" Feb 9 19:29:21.969076 env[1145]: time="2024-02-09T19:29:21.969035746Z" level=info msg="RemoveContainer for \"750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740\"" Feb 9 19:29:21.972731 env[1145]: time="2024-02-09T19:29:21.972678629Z" level=info msg="RemoveContainer for \"750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740\" returns successfully" Feb 9 19:29:21.972995 kubelet[1496]: I0209 19:29:21.972884 1496 scope.go:115] "RemoveContainer" containerID="08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5" Feb 9 19:29:21.973369 env[1145]: time="2024-02-09T19:29:21.973260779Z" level=error msg="ContainerStatus for \"08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5\": not found" Feb 9 19:29:21.973599 kubelet[1496]: E0209 19:29:21.973565 1496 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5\": not found" containerID="08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5" Feb 9 19:29:21.973599 kubelet[1496]: I0209 19:29:21.973612 1496 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5} err="failed to get container status \"08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5\": not found" Feb 9 19:29:21.973799 kubelet[1496]: I0209 19:29:21.973629 1496 scope.go:115] "RemoveContainer" containerID="ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce" Feb 9 19:29:21.973992 env[1145]: time="2024-02-09T19:29:21.973903076Z" level=error msg="ContainerStatus for \"ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce\": not found" Feb 9 19:29:21.974113 kubelet[1496]: E0209 19:29:21.974090 1496 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce\": not found" containerID="ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce" Feb 9 19:29:21.974228 kubelet[1496]: I0209 19:29:21.974135 1496 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce} err="failed to get container status \"ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac0b8990674c5b1ea25846988daae2f2a67b4814c4eec92945b6160dac11cdce\": not found" Feb 9 19:29:21.974228 kubelet[1496]: I0209 19:29:21.974157 1496 scope.go:115] "RemoveContainer" containerID="fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8" Feb 9 19:29:21.974456 env[1145]: time="2024-02-09T19:29:21.974369919Z" level=error msg="ContainerStatus for \"fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8\": not 
found" Feb 9 19:29:21.974617 kubelet[1496]: E0209 19:29:21.974564 1496 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8\": not found" containerID="fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8" Feb 9 19:29:21.974617 kubelet[1496]: I0209 19:29:21.974599 1496 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8} err="failed to get container status \"fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd5ccec52583f5a3b9bda7e3a53ca7be050308fcd151f982b352b2c5e3e208d8\": not found" Feb 9 19:29:21.974755 kubelet[1496]: I0209 19:29:21.974644 1496 scope.go:115] "RemoveContainer" containerID="b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19" Feb 9 19:29:21.974976 env[1145]: time="2024-02-09T19:29:21.974890614Z" level=error msg="ContainerStatus for \"b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19\": not found" Feb 9 19:29:21.975094 kubelet[1496]: E0209 19:29:21.975071 1496 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19\": not found" containerID="b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19" Feb 9 19:29:21.975197 kubelet[1496]: I0209 19:29:21.975117 1496 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd 
ID:b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19} err="failed to get container status \"b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3795e5fb83ef0dd771c4a2966dec3a5acd99ca99c753e34593eb1c93fc10f19\": not found" Feb 9 19:29:21.975197 kubelet[1496]: I0209 19:29:21.975134 1496 scope.go:115] "RemoveContainer" containerID="750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740" Feb 9 19:29:21.975436 env[1145]: time="2024-02-09T19:29:21.975342213Z" level=error msg="ContainerStatus for \"750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740\": not found" Feb 9 19:29:21.975575 kubelet[1496]: E0209 19:29:21.975532 1496 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740\": not found" containerID="750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740" Feb 9 19:29:21.975575 kubelet[1496]: I0209 19:29:21.975565 1496 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740} err="failed to get container status \"750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740\": rpc error: code = NotFound desc = an error occurred when try to find container \"750de941c557d907043eaf78fab2c253352698695b25ed3f57b161b804956740\": not found" Feb 9 19:29:22.451610 kubelet[1496]: E0209 19:29:22.451544 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:23.256424 kubelet[1496]: I0209 19:29:23.256374 1496 topology_manager.go:210] 
"Topology Admit Handler" Feb 9 19:29:23.256424 kubelet[1496]: E0209 19:29:23.256447 1496 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" containerName="mount-cgroup" Feb 9 19:29:23.256736 kubelet[1496]: E0209 19:29:23.256465 1496 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" containerName="apply-sysctl-overwrites" Feb 9 19:29:23.256736 kubelet[1496]: E0209 19:29:23.256476 1496 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" containerName="cilium-agent" Feb 9 19:29:23.256736 kubelet[1496]: E0209 19:29:23.256486 1496 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" containerName="mount-bpf-fs" Feb 9 19:29:23.256736 kubelet[1496]: E0209 19:29:23.256496 1496 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" containerName="clean-cilium-state" Feb 9 19:29:23.256736 kubelet[1496]: I0209 19:29:23.256530 1496 memory_manager.go:346] "RemoveStaleState removing state" podUID="00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6" containerName="cilium-agent" Feb 9 19:29:23.263525 systemd[1]: Created slice kubepods-besteffort-pod458aed33_fb3d_4d68_92b8_1b0c66d2bef4.slice. 
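In the teardown sequence above, each RemoveContainer succeeds, then a second pass re-queries the same IDs and gets NotFound from containerd; the kubelet logs the error and moves on, since the containers are already gone. A small sketch (hypothetical helper over abbreviated sample lines, regex keyed to the `containerID="…"` form these kubelet entries use) that collects the 64-hex container IDs referenced in such a log:

```python
import re

SAMPLE = '''
kubelet[1496]: I0209 19:29:21.931439 1496 scope.go:115] "RemoveContainer" containerID="08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5"
kubelet[1496]: E0209 19:29:21.973565 1496 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound" containerID="08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5"
'''

def container_ids(log_text: str) -> set:
    # containerd container IDs are 64 lowercase hex characters.
    return set(re.findall(r'containerID="([0-9a-f]{64})"', log_text))

print(container_ids(SAMPLE))
# -> {'08ab382b736a29d07d1191479520b2e7100d632d2dc73a56ee0ba27c7cfab7f5'}
```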
Feb 9 19:29:23.286898 kubelet[1496]: I0209 19:29:23.286845 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/458aed33-fb3d-4d68-92b8-1b0c66d2bef4-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-gsm5d\" (UID: \"458aed33-fb3d-4d68-92b8-1b0c66d2bef4\") " pod="kube-system/cilium-operator-f59cbd8c6-gsm5d" Feb 9 19:29:23.287123 kubelet[1496]: I0209 19:29:23.286905 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nf27\" (UniqueName: \"kubernetes.io/projected/458aed33-fb3d-4d68-92b8-1b0c66d2bef4-kube-api-access-9nf27\") pod \"cilium-operator-f59cbd8c6-gsm5d\" (UID: \"458aed33-fb3d-4d68-92b8-1b0c66d2bef4\") " pod="kube-system/cilium-operator-f59cbd8c6-gsm5d" Feb 9 19:29:23.315087 kubelet[1496]: I0209 19:29:23.315039 1496 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:29:23.322661 systemd[1]: Created slice kubepods-burstable-poda9c0d0e6_5382_459a_beb1_2e2b49f31e81.slice. 
Feb 9 19:29:23.387600 kubelet[1496]: I0209 19:29:23.387555 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-clustermesh-secrets\") pod \"cilium-q4nrz\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " pod="kube-system/cilium-q4nrz" Feb 9 19:29:23.387956 kubelet[1496]: I0209 19:29:23.387932 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-config-path\") pod \"cilium-q4nrz\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " pod="kube-system/cilium-q4nrz" Feb 9 19:29:23.388137 kubelet[1496]: I0209 19:29:23.388115 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-etc-cni-netd\") pod \"cilium-q4nrz\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " pod="kube-system/cilium-q4nrz" Feb 9 19:29:23.388258 kubelet[1496]: I0209 19:29:23.388162 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-lib-modules\") pod \"cilium-q4nrz\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " pod="kube-system/cilium-q4nrz" Feb 9 19:29:23.388258 kubelet[1496]: I0209 19:29:23.388199 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-bpf-maps\") pod \"cilium-q4nrz\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " pod="kube-system/cilium-q4nrz" Feb 9 19:29:23.388258 kubelet[1496]: I0209 19:29:23.388241 1496 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-cgroup\") pod \"cilium-q4nrz\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " pod="kube-system/cilium-q4nrz" Feb 9 19:29:23.388431 kubelet[1496]: I0209 19:29:23.388282 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cni-path\") pod \"cilium-q4nrz\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " pod="kube-system/cilium-q4nrz" Feb 9 19:29:23.388431 kubelet[1496]: I0209 19:29:23.388323 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-host-proc-sys-kernel\") pod \"cilium-q4nrz\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " pod="kube-system/cilium-q4nrz" Feb 9 19:29:23.388431 kubelet[1496]: I0209 19:29:23.388360 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-hubble-tls\") pod \"cilium-q4nrz\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " pod="kube-system/cilium-q4nrz" Feb 9 19:29:23.388431 kubelet[1496]: I0209 19:29:23.388398 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvmt6\" (UniqueName: \"kubernetes.io/projected/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-kube-api-access-tvmt6\") pod \"cilium-q4nrz\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " pod="kube-system/cilium-q4nrz" Feb 9 19:29:23.388646 kubelet[1496]: I0209 19:29:23.388457 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-hostproc\") pod \"cilium-q4nrz\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " pod="kube-system/cilium-q4nrz" Feb 9 19:29:23.388646 kubelet[1496]: I0209 19:29:23.388495 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-xtables-lock\") pod \"cilium-q4nrz\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " pod="kube-system/cilium-q4nrz" Feb 9 19:29:23.388646 kubelet[1496]: I0209 19:29:23.388531 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-ipsec-secrets\") pod \"cilium-q4nrz\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " pod="kube-system/cilium-q4nrz" Feb 9 19:29:23.388646 kubelet[1496]: I0209 19:29:23.388570 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-host-proc-sys-net\") pod \"cilium-q4nrz\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " pod="kube-system/cilium-q4nrz" Feb 9 19:29:23.388646 kubelet[1496]: I0209 19:29:23.388626 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-run\") pod \"cilium-q4nrz\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " pod="kube-system/cilium-q4nrz" Feb 9 19:29:23.452521 kubelet[1496]: E0209 19:29:23.452458 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:23.568538 env[1145]: time="2024-02-09T19:29:23.568375447Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-gsm5d,Uid:458aed33-fb3d-4d68-92b8-1b0c66d2bef4,Namespace:kube-system,Attempt:0,}" Feb 9 19:29:23.589870 env[1145]: time="2024-02-09T19:29:23.589569449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:29:23.589870 env[1145]: time="2024-02-09T19:29:23.589627087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:29:23.589870 env[1145]: time="2024-02-09T19:29:23.589646977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:29:23.590365 env[1145]: time="2024-02-09T19:29:23.590302483Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/18be565e81a6398a1344e5a100e5f1f431fcc23342c9a89e2089f38a2658240c pid=3112 runtime=io.containerd.runc.v2 Feb 9 19:29:23.608357 systemd[1]: Started cri-containerd-18be565e81a6398a1344e5a100e5f1f431fcc23342c9a89e2089f38a2658240c.scope. Feb 9 19:29:23.631686 env[1145]: time="2024-02-09T19:29:23.631627894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q4nrz,Uid:a9c0d0e6-5382-459a-beb1-2e2b49f31e81,Namespace:kube-system,Attempt:0,}" Feb 9 19:29:23.654369 env[1145]: time="2024-02-09T19:29:23.654282983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:29:23.654670 env[1145]: time="2024-02-09T19:29:23.654630423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:29:23.654841 env[1145]: time="2024-02-09T19:29:23.654806666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:29:23.655214 env[1145]: time="2024-02-09T19:29:23.655163878Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e626d519d271a9597f33b01d635e2c85eb09277505ad904bbd53ddfaa713d8b pid=3142 runtime=io.containerd.runc.v2 Feb 9 19:29:23.681898 kubelet[1496]: I0209 19:29:23.681465 1496 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6 path="/var/lib/kubelet/pods/00b6b3b6-5c9d-4f53-9213-d3c72edfc1c6/volumes" Feb 9 19:29:23.681511 systemd[1]: Started cri-containerd-2e626d519d271a9597f33b01d635e2c85eb09277505ad904bbd53ddfaa713d8b.scope. Feb 9 19:29:23.700790 env[1145]: time="2024-02-09T19:29:23.700715505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-gsm5d,Uid:458aed33-fb3d-4d68-92b8-1b0c66d2bef4,Namespace:kube-system,Attempt:0,} returns sandbox id \"18be565e81a6398a1344e5a100e5f1f431fcc23342c9a89e2089f38a2658240c\"" Feb 9 19:29:23.712473 kubelet[1496]: E0209 19:29:23.712274 1496 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url Feb 9 19:29:23.713106 env[1145]: time="2024-02-09T19:29:23.713047271Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:29:23.733496 env[1145]: time="2024-02-09T19:29:23.733426112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q4nrz,Uid:a9c0d0e6-5382-459a-beb1-2e2b49f31e81,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e626d519d271a9597f33b01d635e2c85eb09277505ad904bbd53ddfaa713d8b\"" Feb 9 19:29:23.737323 env[1145]: time="2024-02-09T19:29:23.737275961Z" level=info msg="CreateContainer within sandbox 
\"2e626d519d271a9597f33b01d635e2c85eb09277505ad904bbd53ddfaa713d8b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:29:23.755062 env[1145]: time="2024-02-09T19:29:23.755009127Z" level=info msg="CreateContainer within sandbox \"2e626d519d271a9597f33b01d635e2c85eb09277505ad904bbd53ddfaa713d8b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64\"" Feb 9 19:29:23.757066 env[1145]: time="2024-02-09T19:29:23.757029652Z" level=info msg="StartContainer for \"cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64\"" Feb 9 19:29:23.780408 systemd[1]: Started cri-containerd-cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64.scope. Feb 9 19:29:23.799392 systemd[1]: cri-containerd-cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64.scope: Deactivated successfully. Feb 9 19:29:23.815021 env[1145]: time="2024-02-09T19:29:23.814944145Z" level=info msg="shim disconnected" id=cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64 Feb 9 19:29:23.815021 env[1145]: time="2024-02-09T19:29:23.815018387Z" level=warning msg="cleaning up after shim disconnected" id=cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64 namespace=k8s.io Feb 9 19:29:23.815396 env[1145]: time="2024-02-09T19:29:23.815033013Z" level=info msg="cleaning up dead shim" Feb 9 19:29:23.828633 env[1145]: time="2024-02-09T19:29:23.827242419Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:29:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3206 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:29:23Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:29:23.828633 env[1145]: time="2024-02-09T19:29:23.827588678Z" 
level=error msg="copy shim log" error="read /proc/self/fd/65: file already closed" Feb 9 19:29:23.829068 env[1145]: time="2024-02-09T19:29:23.829008859Z" level=error msg="Failed to pipe stderr of container \"cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64\"" error="reading from a closed fifo" Feb 9 19:29:23.830914 env[1145]: time="2024-02-09T19:29:23.830852065Z" level=error msg="Failed to pipe stdout of container \"cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64\"" error="reading from a closed fifo" Feb 9 19:29:23.833292 env[1145]: time="2024-02-09T19:29:23.833211777Z" level=error msg="StartContainer for \"cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:29:23.833588 kubelet[1496]: E0209 19:29:23.833516 1496 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64" Feb 9 19:29:23.833756 kubelet[1496]: E0209 19:29:23.833724 1496 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:29:23.833756 kubelet[1496]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:29:23.833756 kubelet[1496]: rm /hostbin/cilium-mount Feb 9 19:29:23.833756 kubelet[1496]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tvmt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-q4nrz_kube-system(a9c0d0e6-5382-459a-beb1-2e2b49f31e81): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:29:23.834104 kubelet[1496]: E0209 19:29:23.833808 1496 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error 
during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-q4nrz" podUID=a9c0d0e6-5382-459a-beb1-2e2b49f31e81 Feb 9 19:29:23.943784 env[1145]: time="2024-02-09T19:29:23.943707601Z" level=info msg="CreateContainer within sandbox \"2e626d519d271a9597f33b01d635e2c85eb09277505ad904bbd53ddfaa713d8b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 9 19:29:23.958278 env[1145]: time="2024-02-09T19:29:23.958198725Z" level=info msg="CreateContainer within sandbox \"2e626d519d271a9597f33b01d635e2c85eb09277505ad904bbd53ddfaa713d8b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"1e7aa9b0693678326c8c8194480f0bf6bc26aec231cb6fc15a224e0d21ea116e\"" Feb 9 19:29:23.959455 env[1145]: time="2024-02-09T19:29:23.959403185Z" level=info msg="StartContainer for \"1e7aa9b0693678326c8c8194480f0bf6bc26aec231cb6fc15a224e0d21ea116e\"" Feb 9 19:29:23.991506 systemd[1]: Started cri-containerd-1e7aa9b0693678326c8c8194480f0bf6bc26aec231cb6fc15a224e0d21ea116e.scope. Feb 9 19:29:24.010323 systemd[1]: cri-containerd-1e7aa9b0693678326c8c8194480f0bf6bc26aec231cb6fc15a224e0d21ea116e.scope: Deactivated successfully. 
Feb 9 19:29:24.019263 env[1145]: time="2024-02-09T19:29:24.019186764Z" level=info msg="shim disconnected" id=1e7aa9b0693678326c8c8194480f0bf6bc26aec231cb6fc15a224e0d21ea116e Feb 9 19:29:24.019263 env[1145]: time="2024-02-09T19:29:24.019261615Z" level=warning msg="cleaning up after shim disconnected" id=1e7aa9b0693678326c8c8194480f0bf6bc26aec231cb6fc15a224e0d21ea116e namespace=k8s.io Feb 9 19:29:24.019612 env[1145]: time="2024-02-09T19:29:24.019276993Z" level=info msg="cleaning up dead shim" Feb 9 19:29:24.031924 env[1145]: time="2024-02-09T19:29:24.031849894Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:29:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3244 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:29:24Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1e7aa9b0693678326c8c8194480f0bf6bc26aec231cb6fc15a224e0d21ea116e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:29:24.032299 env[1145]: time="2024-02-09T19:29:24.032219690Z" level=error msg="copy shim log" error="read /proc/self/fd/69: file already closed" Feb 9 19:29:24.033683 env[1145]: time="2024-02-09T19:29:24.033609872Z" level=error msg="Failed to pipe stderr of container \"1e7aa9b0693678326c8c8194480f0bf6bc26aec231cb6fc15a224e0d21ea116e\"" error="reading from a closed fifo" Feb 9 19:29:24.033832 env[1145]: time="2024-02-09T19:29:24.033689114Z" level=error msg="Failed to pipe stdout of container \"1e7aa9b0693678326c8c8194480f0bf6bc26aec231cb6fc15a224e0d21ea116e\"" error="reading from a closed fifo" Feb 9 19:29:24.036048 env[1145]: time="2024-02-09T19:29:24.035979425Z" level=error msg="StartContainer for \"1e7aa9b0693678326c8c8194480f0bf6bc26aec231cb6fc15a224e0d21ea116e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:29:24.036395 kubelet[1496]: E0209 19:29:24.036353 1496 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1e7aa9b0693678326c8c8194480f0bf6bc26aec231cb6fc15a224e0d21ea116e" Feb 9 19:29:24.036600 kubelet[1496]: E0209 19:29:24.036555 1496 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:29:24.036600 kubelet[1496]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:29:24.036600 kubelet[1496]: rm /hostbin/cilium-mount Feb 9 19:29:24.036600 kubelet[1496]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tvmt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-q4nrz_kube-system(a9c0d0e6-5382-459a-beb1-2e2b49f31e81): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:29:24.037015 kubelet[1496]: E0209 19:29:24.036643 1496 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-q4nrz" podUID=a9c0d0e6-5382-459a-beb1-2e2b49f31e81 Feb 9 19:29:24.453214 kubelet[1496]: E0209 19:29:24.453143 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:24.853373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785601582.mount: Deactivated successfully. 
Feb 9 19:29:24.944632 kubelet[1496]: I0209 19:29:24.944575 1496 scope.go:115] "RemoveContainer" containerID="cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64" Feb 9 19:29:24.945373 env[1145]: time="2024-02-09T19:29:24.945319822Z" level=info msg="StopPodSandbox for \"2e626d519d271a9597f33b01d635e2c85eb09277505ad904bbd53ddfaa713d8b\"" Feb 9 19:29:24.949981 env[1145]: time="2024-02-09T19:29:24.945408394Z" level=info msg="Container to stop \"cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:29:24.949981 env[1145]: time="2024-02-09T19:29:24.945431323Z" level=info msg="Container to stop \"1e7aa9b0693678326c8c8194480f0bf6bc26aec231cb6fc15a224e0d21ea116e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:29:24.948326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e626d519d271a9597f33b01d635e2c85eb09277505ad904bbd53ddfaa713d8b-shm.mount: Deactivated successfully. Feb 9 19:29:24.951834 env[1145]: time="2024-02-09T19:29:24.951788805Z" level=info msg="RemoveContainer for \"cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64\"" Feb 9 19:29:24.964991 env[1145]: time="2024-02-09T19:29:24.964931643Z" level=info msg="RemoveContainer for \"cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64\" returns successfully" Feb 9 19:29:24.967096 systemd[1]: cri-containerd-2e626d519d271a9597f33b01d635e2c85eb09277505ad904bbd53ddfaa713d8b.scope: Deactivated successfully. Feb 9 19:29:25.033982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e626d519d271a9597f33b01d635e2c85eb09277505ad904bbd53ddfaa713d8b-rootfs.mount: Deactivated successfully. 
Feb 9 19:29:25.047470 env[1145]: time="2024-02-09T19:29:25.047398468Z" level=info msg="shim disconnected" id=2e626d519d271a9597f33b01d635e2c85eb09277505ad904bbd53ddfaa713d8b Feb 9 19:29:25.047470 env[1145]: time="2024-02-09T19:29:25.047471661Z" level=warning msg="cleaning up after shim disconnected" id=2e626d519d271a9597f33b01d635e2c85eb09277505ad904bbd53ddfaa713d8b namespace=k8s.io Feb 9 19:29:25.047470 env[1145]: time="2024-02-09T19:29:25.047488965Z" level=info msg="cleaning up dead shim" Feb 9 19:29:25.062584 env[1145]: time="2024-02-09T19:29:25.062529022Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:29:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3274 runtime=io.containerd.runc.v2\n" Feb 9 19:29:25.063235 env[1145]: time="2024-02-09T19:29:25.063192248Z" level=info msg="TearDown network for sandbox \"2e626d519d271a9597f33b01d635e2c85eb09277505ad904bbd53ddfaa713d8b\" successfully" Feb 9 19:29:25.063400 env[1145]: time="2024-02-09T19:29:25.063372077Z" level=info msg="StopPodSandbox for \"2e626d519d271a9597f33b01d635e2c85eb09277505ad904bbd53ddfaa713d8b\" returns successfully" Feb 9 19:29:25.107090 kubelet[1496]: I0209 19:29:25.106932 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-etc-cni-netd\") pod \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " Feb 9 19:29:25.107090 kubelet[1496]: I0209 19:29:25.107013 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-lib-modules\") pod \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " Feb 9 19:29:25.108199 kubelet[1496]: I0209 19:29:25.108141 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a9c0d0e6-5382-459a-beb1-2e2b49f31e81" (UID: "a9c0d0e6-5382-459a-beb1-2e2b49f31e81"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:25.110801 kubelet[1496]: I0209 19:29:25.108260 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-hubble-tls\") pod \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " Feb 9 19:29:25.110801 kubelet[1496]: I0209 19:29:25.108419 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvmt6\" (UniqueName: \"kubernetes.io/projected/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-kube-api-access-tvmt6\") pod \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " Feb 9 19:29:25.110801 kubelet[1496]: I0209 19:29:25.108456 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-host-proc-sys-net\") pod \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " Feb 9 19:29:25.110801 kubelet[1496]: I0209 19:29:25.108490 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-bpf-maps\") pod \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " Feb 9 19:29:25.110801 kubelet[1496]: I0209 19:29:25.108524 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-cgroup\") pod \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\" (UID: 
\"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " Feb 9 19:29:25.110801 kubelet[1496]: I0209 19:29:25.108558 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-ipsec-secrets\") pod \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " Feb 9 19:29:25.111229 kubelet[1496]: I0209 19:29:25.108595 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-clustermesh-secrets\") pod \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " Feb 9 19:29:25.111229 kubelet[1496]: I0209 19:29:25.108623 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-run\") pod \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " Feb 9 19:29:25.111229 kubelet[1496]: I0209 19:29:25.108661 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cni-path\") pod \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " Feb 9 19:29:25.111229 kubelet[1496]: I0209 19:29:25.108694 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-host-proc-sys-kernel\") pod \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " Feb 9 19:29:25.111229 kubelet[1496]: I0209 19:29:25.108732 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-config-path\") pod \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " Feb 9 19:29:25.111229 kubelet[1496]: I0209 19:29:25.108777 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-hostproc\") pod \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " Feb 9 19:29:25.111572 kubelet[1496]: I0209 19:29:25.108813 1496 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-xtables-lock\") pod \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\" (UID: \"a9c0d0e6-5382-459a-beb1-2e2b49f31e81\") " Feb 9 19:29:25.111572 kubelet[1496]: I0209 19:29:25.108869 1496 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-etc-cni-netd\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:25.111572 kubelet[1496]: I0209 19:29:25.108901 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a9c0d0e6-5382-459a-beb1-2e2b49f31e81" (UID: "a9c0d0e6-5382-459a-beb1-2e2b49f31e81"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:25.111572 kubelet[1496]: I0209 19:29:25.109071 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a9c0d0e6-5382-459a-beb1-2e2b49f31e81" (UID: "a9c0d0e6-5382-459a-beb1-2e2b49f31e81"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:25.111572 kubelet[1496]: I0209 19:29:25.109351 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a9c0d0e6-5382-459a-beb1-2e2b49f31e81" (UID: "a9c0d0e6-5382-459a-beb1-2e2b49f31e81"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:25.111986 kubelet[1496]: I0209 19:29:25.109388 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a9c0d0e6-5382-459a-beb1-2e2b49f31e81" (UID: "a9c0d0e6-5382-459a-beb1-2e2b49f31e81"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:25.111986 kubelet[1496]: I0209 19:29:25.109413 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a9c0d0e6-5382-459a-beb1-2e2b49f31e81" (UID: "a9c0d0e6-5382-459a-beb1-2e2b49f31e81"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:25.111986 kubelet[1496]: I0209 19:29:25.109711 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a9c0d0e6-5382-459a-beb1-2e2b49f31e81" (UID: "a9c0d0e6-5382-459a-beb1-2e2b49f31e81"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:25.111986 kubelet[1496]: I0209 19:29:25.109751 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a9c0d0e6-5382-459a-beb1-2e2b49f31e81" (UID: "a9c0d0e6-5382-459a-beb1-2e2b49f31e81"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:25.111986 kubelet[1496]: I0209 19:29:25.109816 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cni-path" (OuterVolumeSpecName: "cni-path") pod "a9c0d0e6-5382-459a-beb1-2e2b49f31e81" (UID: "a9c0d0e6-5382-459a-beb1-2e2b49f31e81"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:25.112276 kubelet[1496]: W0209 19:29:25.110009 1496 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a9c0d0e6-5382-459a-beb1-2e2b49f31e81/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:29:25.112642 kubelet[1496]: I0209 19:29:25.112590 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-hostproc" (OuterVolumeSpecName: "hostproc") pod "a9c0d0e6-5382-459a-beb1-2e2b49f31e81" (UID: "a9c0d0e6-5382-459a-beb1-2e2b49f31e81"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:29:25.113453 kubelet[1496]: I0209 19:29:25.113415 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a9c0d0e6-5382-459a-beb1-2e2b49f31e81" (UID: "a9c0d0e6-5382-459a-beb1-2e2b49f31e81"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:29:25.125656 kubelet[1496]: I0209 19:29:25.125594 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-kube-api-access-tvmt6" (OuterVolumeSpecName: "kube-api-access-tvmt6") pod "a9c0d0e6-5382-459a-beb1-2e2b49f31e81" (UID: "a9c0d0e6-5382-459a-beb1-2e2b49f31e81"). InnerVolumeSpecName "kube-api-access-tvmt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:29:25.125926 kubelet[1496]: I0209 19:29:25.125871 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a9c0d0e6-5382-459a-beb1-2e2b49f31e81" (UID: "a9c0d0e6-5382-459a-beb1-2e2b49f31e81"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:29:25.126892 kubelet[1496]: I0209 19:29:25.126852 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a9c0d0e6-5382-459a-beb1-2e2b49f31e81" (UID: "a9c0d0e6-5382-459a-beb1-2e2b49f31e81"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:29:25.128643 kubelet[1496]: I0209 19:29:25.128600 1496 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a9c0d0e6-5382-459a-beb1-2e2b49f31e81" (UID: "a9c0d0e6-5382-459a-beb1-2e2b49f31e81"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:29:25.209982 kubelet[1496]: I0209 19:29:25.209931 1496 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-lib-modules\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:25.209982 kubelet[1496]: I0209 19:29:25.209984 1496 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-hubble-tls\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:25.210269 kubelet[1496]: I0209 19:29:25.210003 1496 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-tvmt6\" (UniqueName: \"kubernetes.io/projected/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-kube-api-access-tvmt6\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:25.210269 kubelet[1496]: I0209 19:29:25.210020 1496 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-host-proc-sys-net\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:25.210269 kubelet[1496]: I0209 19:29:25.210036 1496 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-bpf-maps\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:25.210269 kubelet[1496]: I0209 19:29:25.210050 1496 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-cgroup\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:25.210269 kubelet[1496]: I0209 19:29:25.210065 1496 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-ipsec-secrets\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:25.210269 
kubelet[1496]: I0209 19:29:25.210080 1496 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-clustermesh-secrets\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:25.210269 kubelet[1496]: I0209 19:29:25.210095 1496 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-run\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:25.210269 kubelet[1496]: I0209 19:29:25.210109 1496 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cni-path\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:25.210612 kubelet[1496]: I0209 19:29:25.210135 1496 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-host-proc-sys-kernel\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:25.210612 kubelet[1496]: I0209 19:29:25.210151 1496 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-cilium-config-path\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:25.210612 kubelet[1496]: I0209 19:29:25.210166 1496 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-hostproc\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:25.210612 kubelet[1496]: I0209 19:29:25.210182 1496 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9c0d0e6-5382-459a-beb1-2e2b49f31e81-xtables-lock\") on node \"10.128.0.112\" DevicePath \"\"" Feb 9 19:29:25.433817 systemd[1]: 
var-lib-kubelet-pods-a9c0d0e6\x2d5382\x2d459a\x2dbeb1\x2d2e2b49f31e81-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtvmt6.mount: Deactivated successfully. Feb 9 19:29:25.433980 systemd[1]: var-lib-kubelet-pods-a9c0d0e6\x2d5382\x2d459a\x2dbeb1\x2d2e2b49f31e81-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:29:25.434177 systemd[1]: var-lib-kubelet-pods-a9c0d0e6\x2d5382\x2d459a\x2dbeb1\x2d2e2b49f31e81-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:29:25.434362 systemd[1]: var-lib-kubelet-pods-a9c0d0e6\x2d5382\x2d459a\x2dbeb1\x2d2e2b49f31e81-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:29:25.453413 kubelet[1496]: E0209 19:29:25.453363 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:25.635844 kubelet[1496]: I0209 19:29:25.634212 1496 setters.go:548] "Node became not ready" node="10.128.0.112" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:29:25.634134148 +0000 UTC m=+84.784226922 LastTransitionTime:2024-02-09 19:29:25.634134148 +0000 UTC m=+84.784226922 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 19:29:25.686398 systemd[1]: Removed slice kubepods-burstable-poda9c0d0e6_5382_459a_beb1_2e2b49f31e81.slice. 
Feb 9 19:29:25.801965 env[1145]: time="2024-02-09T19:29:25.801885586Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:25.804288 env[1145]: time="2024-02-09T19:29:25.804232069Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:25.806482 env[1145]: time="2024-02-09T19:29:25.806433375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:25.807125 env[1145]: time="2024-02-09T19:29:25.807067118Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:29:25.809808 env[1145]: time="2024-02-09T19:29:25.809741107Z" level=info msg="CreateContainer within sandbox \"18be565e81a6398a1344e5a100e5f1f431fcc23342c9a89e2089f38a2658240c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:29:25.829908 env[1145]: time="2024-02-09T19:29:25.829830456Z" level=info msg="CreateContainer within sandbox \"18be565e81a6398a1344e5a100e5f1f431fcc23342c9a89e2089f38a2658240c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e504fd5704a7793222c61cd4e2efa81be9f7e5ce56acef08275fa0bc175a3321\"" Feb 9 19:29:25.830723 env[1145]: time="2024-02-09T19:29:25.830641579Z" level=info msg="StartContainer for \"e504fd5704a7793222c61cd4e2efa81be9f7e5ce56acef08275fa0bc175a3321\"" Feb 9 19:29:25.868212 
systemd[1]: Started cri-containerd-e504fd5704a7793222c61cd4e2efa81be9f7e5ce56acef08275fa0bc175a3321.scope. Feb 9 19:29:25.909856 env[1145]: time="2024-02-09T19:29:25.909747624Z" level=info msg="StartContainer for \"e504fd5704a7793222c61cd4e2efa81be9f7e5ce56acef08275fa0bc175a3321\" returns successfully" Feb 9 19:29:25.948141 kubelet[1496]: I0209 19:29:25.947993 1496 scope.go:115] "RemoveContainer" containerID="1e7aa9b0693678326c8c8194480f0bf6bc26aec231cb6fc15a224e0d21ea116e" Feb 9 19:29:25.954280 env[1145]: time="2024-02-09T19:29:25.954228078Z" level=info msg="RemoveContainer for \"1e7aa9b0693678326c8c8194480f0bf6bc26aec231cb6fc15a224e0d21ea116e\"" Feb 9 19:29:25.961831 env[1145]: time="2024-02-09T19:29:25.961755133Z" level=info msg="RemoveContainer for \"1e7aa9b0693678326c8c8194480f0bf6bc26aec231cb6fc15a224e0d21ea116e\" returns successfully" Feb 9 19:29:26.026470 kubelet[1496]: I0209 19:29:26.026428 1496 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-gsm5d" podStartSLOduration=-9.223372033828402e+09 pod.CreationTimestamp="2024-02-09 19:29:23 +0000 UTC" firstStartedPulling="2024-02-09 19:29:23.702299922 +0000 UTC m=+82.852392702" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:29:26.007945129 +0000 UTC m=+85.158037921" watchObservedRunningTime="2024-02-09 19:29:26.026374508 +0000 UTC m=+85.176467283" Feb 9 19:29:26.027891 kubelet[1496]: I0209 19:29:26.027842 1496 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:29:26.028131 kubelet[1496]: E0209 19:29:26.028111 1496 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a9c0d0e6-5382-459a-beb1-2e2b49f31e81" containerName="mount-cgroup" Feb 9 19:29:26.028318 kubelet[1496]: E0209 19:29:26.028298 1496 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a9c0d0e6-5382-459a-beb1-2e2b49f31e81" containerName="mount-cgroup" Feb 9 19:29:26.028544 kubelet[1496]: I0209 19:29:26.028525 1496 
memory_manager.go:346] "RemoveStaleState removing state" podUID="a9c0d0e6-5382-459a-beb1-2e2b49f31e81" containerName="mount-cgroup" Feb 9 19:29:26.028696 kubelet[1496]: I0209 19:29:26.028681 1496 memory_manager.go:346] "RemoveStaleState removing state" podUID="a9c0d0e6-5382-459a-beb1-2e2b49f31e81" containerName="mount-cgroup" Feb 9 19:29:26.039878 systemd[1]: Created slice kubepods-burstable-poda3e2dc1b_5efd_4689_ba29_dc736555eba4.slice. Feb 9 19:29:26.113549 kubelet[1496]: I0209 19:29:26.113488 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a3e2dc1b-5efd-4689-ba29-dc736555eba4-clustermesh-secrets\") pod \"cilium-lpvhc\" (UID: \"a3e2dc1b-5efd-4689-ba29-dc736555eba4\") " pod="kube-system/cilium-lpvhc" Feb 9 19:29:26.113813 kubelet[1496]: I0209 19:29:26.113578 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a3e2dc1b-5efd-4689-ba29-dc736555eba4-hubble-tls\") pod \"cilium-lpvhc\" (UID: \"a3e2dc1b-5efd-4689-ba29-dc736555eba4\") " pod="kube-system/cilium-lpvhc" Feb 9 19:29:26.113813 kubelet[1496]: I0209 19:29:26.113630 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm8cw\" (UniqueName: \"kubernetes.io/projected/a3e2dc1b-5efd-4689-ba29-dc736555eba4-kube-api-access-jm8cw\") pod \"cilium-lpvhc\" (UID: \"a3e2dc1b-5efd-4689-ba29-dc736555eba4\") " pod="kube-system/cilium-lpvhc" Feb 9 19:29:26.113813 kubelet[1496]: I0209 19:29:26.113698 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a3e2dc1b-5efd-4689-ba29-dc736555eba4-hostproc\") pod \"cilium-lpvhc\" (UID: \"a3e2dc1b-5efd-4689-ba29-dc736555eba4\") " pod="kube-system/cilium-lpvhc" Feb 9 19:29:26.113813 kubelet[1496]: I0209 19:29:26.113736 
1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a3e2dc1b-5efd-4689-ba29-dc736555eba4-cni-path\") pod \"cilium-lpvhc\" (UID: \"a3e2dc1b-5efd-4689-ba29-dc736555eba4\") " pod="kube-system/cilium-lpvhc" Feb 9 19:29:26.113813 kubelet[1496]: I0209 19:29:26.113812 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a3e2dc1b-5efd-4689-ba29-dc736555eba4-bpf-maps\") pod \"cilium-lpvhc\" (UID: \"a3e2dc1b-5efd-4689-ba29-dc736555eba4\") " pod="kube-system/cilium-lpvhc" Feb 9 19:29:26.114129 kubelet[1496]: I0209 19:29:26.113871 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3e2dc1b-5efd-4689-ba29-dc736555eba4-etc-cni-netd\") pod \"cilium-lpvhc\" (UID: \"a3e2dc1b-5efd-4689-ba29-dc736555eba4\") " pod="kube-system/cilium-lpvhc" Feb 9 19:29:26.114129 kubelet[1496]: I0209 19:29:26.113962 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a3e2dc1b-5efd-4689-ba29-dc736555eba4-cilium-ipsec-secrets\") pod \"cilium-lpvhc\" (UID: \"a3e2dc1b-5efd-4689-ba29-dc736555eba4\") " pod="kube-system/cilium-lpvhc" Feb 9 19:29:26.114129 kubelet[1496]: I0209 19:29:26.114003 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a3e2dc1b-5efd-4689-ba29-dc736555eba4-host-proc-sys-net\") pod \"cilium-lpvhc\" (UID: \"a3e2dc1b-5efd-4689-ba29-dc736555eba4\") " pod="kube-system/cilium-lpvhc" Feb 9 19:29:26.114129 kubelet[1496]: I0209 19:29:26.114062 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/a3e2dc1b-5efd-4689-ba29-dc736555eba4-host-proc-sys-kernel\") pod \"cilium-lpvhc\" (UID: \"a3e2dc1b-5efd-4689-ba29-dc736555eba4\") " pod="kube-system/cilium-lpvhc" Feb 9 19:29:26.114129 kubelet[1496]: I0209 19:29:26.114123 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a3e2dc1b-5efd-4689-ba29-dc736555eba4-cilium-run\") pod \"cilium-lpvhc\" (UID: \"a3e2dc1b-5efd-4689-ba29-dc736555eba4\") " pod="kube-system/cilium-lpvhc" Feb 9 19:29:26.114442 kubelet[1496]: I0209 19:29:26.114160 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a3e2dc1b-5efd-4689-ba29-dc736555eba4-cilium-cgroup\") pod \"cilium-lpvhc\" (UID: \"a3e2dc1b-5efd-4689-ba29-dc736555eba4\") " pod="kube-system/cilium-lpvhc" Feb 9 19:29:26.114442 kubelet[1496]: I0209 19:29:26.114220 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3e2dc1b-5efd-4689-ba29-dc736555eba4-lib-modules\") pod \"cilium-lpvhc\" (UID: \"a3e2dc1b-5efd-4689-ba29-dc736555eba4\") " pod="kube-system/cilium-lpvhc" Feb 9 19:29:26.114442 kubelet[1496]: I0209 19:29:26.114254 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3e2dc1b-5efd-4689-ba29-dc736555eba4-xtables-lock\") pod \"cilium-lpvhc\" (UID: \"a3e2dc1b-5efd-4689-ba29-dc736555eba4\") " pod="kube-system/cilium-lpvhc" Feb 9 19:29:26.114442 kubelet[1496]: I0209 19:29:26.114311 1496 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3e2dc1b-5efd-4689-ba29-dc736555eba4-cilium-config-path\") pod \"cilium-lpvhc\" (UID: 
\"a3e2dc1b-5efd-4689-ba29-dc736555eba4\") " pod="kube-system/cilium-lpvhc" Feb 9 19:29:26.439479 systemd[1]: run-containerd-runc-k8s.io-e504fd5704a7793222c61cd4e2efa81be9f7e5ce56acef08275fa0bc175a3321-runc.isohA1.mount: Deactivated successfully. Feb 9 19:29:26.454329 kubelet[1496]: E0209 19:29:26.454253 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:26.518538 kubelet[1496]: E0209 19:29:26.518467 1496 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:29:26.649218 env[1145]: time="2024-02-09T19:29:26.649162955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lpvhc,Uid:a3e2dc1b-5efd-4689-ba29-dc736555eba4,Namespace:kube-system,Attempt:0,}" Feb 9 19:29:26.675003 env[1145]: time="2024-02-09T19:29:26.674901134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:29:26.675226 env[1145]: time="2024-02-09T19:29:26.674956044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:29:26.675226 env[1145]: time="2024-02-09T19:29:26.675007108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:29:26.675552 env[1145]: time="2024-02-09T19:29:26.675476571Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/235025f7b3ea38b23e323a8329a8ec7864699dd54a1566dc168b875ef6c825cf pid=3345 runtime=io.containerd.runc.v2 Feb 9 19:29:26.703598 systemd[1]: Started cri-containerd-235025f7b3ea38b23e323a8329a8ec7864699dd54a1566dc168b875ef6c825cf.scope. 
Feb 9 19:29:26.741937 env[1145]: time="2024-02-09T19:29:26.741882546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lpvhc,Uid:a3e2dc1b-5efd-4689-ba29-dc736555eba4,Namespace:kube-system,Attempt:0,} returns sandbox id \"235025f7b3ea38b23e323a8329a8ec7864699dd54a1566dc168b875ef6c825cf\"" Feb 9 19:29:26.746288 env[1145]: time="2024-02-09T19:29:26.746238392Z" level=info msg="CreateContainer within sandbox \"235025f7b3ea38b23e323a8329a8ec7864699dd54a1566dc168b875ef6c825cf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:29:26.764217 env[1145]: time="2024-02-09T19:29:26.764166072Z" level=info msg="CreateContainer within sandbox \"235025f7b3ea38b23e323a8329a8ec7864699dd54a1566dc168b875ef6c825cf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2642def13cae6ee3b7e3769fb8306dfc396ef75de5de0c8246432ea45a52a83c\"" Feb 9 19:29:26.765263 env[1145]: time="2024-02-09T19:29:26.765198164Z" level=info msg="StartContainer for \"2642def13cae6ee3b7e3769fb8306dfc396ef75de5de0c8246432ea45a52a83c\"" Feb 9 19:29:26.788247 systemd[1]: Started cri-containerd-2642def13cae6ee3b7e3769fb8306dfc396ef75de5de0c8246432ea45a52a83c.scope. Feb 9 19:29:26.823859 env[1145]: time="2024-02-09T19:29:26.823793396Z" level=info msg="StartContainer for \"2642def13cae6ee3b7e3769fb8306dfc396ef75de5de0c8246432ea45a52a83c\" returns successfully" Feb 9 19:29:26.835572 systemd[1]: cri-containerd-2642def13cae6ee3b7e3769fb8306dfc396ef75de5de0c8246432ea45a52a83c.scope: Deactivated successfully. 
Feb 9 19:29:26.921007 kubelet[1496]: W0209 19:29:26.920932 1496 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9c0d0e6_5382_459a_beb1_2e2b49f31e81.slice/cri-containerd-cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64.scope WatchSource:0}: container "cda430d905058255dccd8edd1568bb9f51c1b4429812a77c010b086f58192f64" in namespace "k8s.io": not found Feb 9 19:29:27.032955 env[1145]: time="2024-02-09T19:29:27.032026852Z" level=info msg="shim disconnected" id=2642def13cae6ee3b7e3769fb8306dfc396ef75de5de0c8246432ea45a52a83c Feb 9 19:29:27.032955 env[1145]: time="2024-02-09T19:29:27.032101252Z" level=warning msg="cleaning up after shim disconnected" id=2642def13cae6ee3b7e3769fb8306dfc396ef75de5de0c8246432ea45a52a83c namespace=k8s.io Feb 9 19:29:27.032955 env[1145]: time="2024-02-09T19:29:27.032116647Z" level=info msg="cleaning up dead shim" Feb 9 19:29:27.044639 env[1145]: time="2024-02-09T19:29:27.044581175Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:29:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3429 runtime=io.containerd.runc.v2\n" Feb 9 19:29:27.434228 systemd[1]: run-containerd-runc-k8s.io-235025f7b3ea38b23e323a8329a8ec7864699dd54a1566dc168b875ef6c825cf-runc.rz96OZ.mount: Deactivated successfully. 
Feb 9 19:29:27.454640 kubelet[1496]: E0209 19:29:27.454575 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:27.680550 kubelet[1496]: I0209 19:29:27.680487 1496 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a9c0d0e6-5382-459a-beb1-2e2b49f31e81 path="/var/lib/kubelet/pods/a9c0d0e6-5382-459a-beb1-2e2b49f31e81/volumes" Feb 9 19:29:27.968584 env[1145]: time="2024-02-09T19:29:27.968518187Z" level=info msg="CreateContainer within sandbox \"235025f7b3ea38b23e323a8329a8ec7864699dd54a1566dc168b875ef6c825cf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:29:27.989167 env[1145]: time="2024-02-09T19:29:27.989093768Z" level=info msg="CreateContainer within sandbox \"235025f7b3ea38b23e323a8329a8ec7864699dd54a1566dc168b875ef6c825cf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f579225e25be063640bd7e44e4c0f2005fefcf01983ac311ca230f8278e3e7e3\"" Feb 9 19:29:27.989929 env[1145]: time="2024-02-09T19:29:27.989878674Z" level=info msg="StartContainer for \"f579225e25be063640bd7e44e4c0f2005fefcf01983ac311ca230f8278e3e7e3\"" Feb 9 19:29:28.031437 systemd[1]: Started cri-containerd-f579225e25be063640bd7e44e4c0f2005fefcf01983ac311ca230f8278e3e7e3.scope. Feb 9 19:29:28.081444 env[1145]: time="2024-02-09T19:29:28.081358582Z" level=info msg="StartContainer for \"f579225e25be063640bd7e44e4c0f2005fefcf01983ac311ca230f8278e3e7e3\" returns successfully" Feb 9 19:29:28.086044 systemd[1]: cri-containerd-f579225e25be063640bd7e44e4c0f2005fefcf01983ac311ca230f8278e3e7e3.scope: Deactivated successfully. 
Feb 9 19:29:28.117325 env[1145]: time="2024-02-09T19:29:28.117263852Z" level=info msg="shim disconnected" id=f579225e25be063640bd7e44e4c0f2005fefcf01983ac311ca230f8278e3e7e3 Feb 9 19:29:28.117833 env[1145]: time="2024-02-09T19:29:28.117787613Z" level=warning msg="cleaning up after shim disconnected" id=f579225e25be063640bd7e44e4c0f2005fefcf01983ac311ca230f8278e3e7e3 namespace=k8s.io Feb 9 19:29:28.117833 env[1145]: time="2024-02-09T19:29:28.117819490Z" level=info msg="cleaning up dead shim" Feb 9 19:29:28.129664 env[1145]: time="2024-02-09T19:29:28.129579444Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:29:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3493 runtime=io.containerd.runc.v2\n" Feb 9 19:29:28.434333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f579225e25be063640bd7e44e4c0f2005fefcf01983ac311ca230f8278e3e7e3-rootfs.mount: Deactivated successfully. Feb 9 19:29:28.455689 kubelet[1496]: E0209 19:29:28.455625 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:28.972834 env[1145]: time="2024-02-09T19:29:28.972755940Z" level=info msg="CreateContainer within sandbox \"235025f7b3ea38b23e323a8329a8ec7864699dd54a1566dc168b875ef6c825cf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:29:28.998013 env[1145]: time="2024-02-09T19:29:28.997939771Z" level=info msg="CreateContainer within sandbox \"235025f7b3ea38b23e323a8329a8ec7864699dd54a1566dc168b875ef6c825cf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e92a78b88362e6056defb5b3fb6b1c9b7aef85e4590af4a2d1407efb4df79892\"" Feb 9 19:29:29.000358 env[1145]: time="2024-02-09T19:29:29.000305788Z" level=info msg="StartContainer for \"e92a78b88362e6056defb5b3fb6b1c9b7aef85e4590af4a2d1407efb4df79892\"" Feb 9 19:29:29.042752 systemd[1]: Started cri-containerd-e92a78b88362e6056defb5b3fb6b1c9b7aef85e4590af4a2d1407efb4df79892.scope. 
Feb 9 19:29:29.085985 env[1145]: time="2024-02-09T19:29:29.085914503Z" level=info msg="StartContainer for \"e92a78b88362e6056defb5b3fb6b1c9b7aef85e4590af4a2d1407efb4df79892\" returns successfully" Feb 9 19:29:29.088824 systemd[1]: cri-containerd-e92a78b88362e6056defb5b3fb6b1c9b7aef85e4590af4a2d1407efb4df79892.scope: Deactivated successfully. Feb 9 19:29:29.121233 env[1145]: time="2024-02-09T19:29:29.121140772Z" level=info msg="shim disconnected" id=e92a78b88362e6056defb5b3fb6b1c9b7aef85e4590af4a2d1407efb4df79892 Feb 9 19:29:29.121536 env[1145]: time="2024-02-09T19:29:29.121247605Z" level=warning msg="cleaning up after shim disconnected" id=e92a78b88362e6056defb5b3fb6b1c9b7aef85e4590af4a2d1407efb4df79892 namespace=k8s.io Feb 9 19:29:29.121536 env[1145]: time="2024-02-09T19:29:29.121264163Z" level=info msg="cleaning up dead shim" Feb 9 19:29:29.133079 env[1145]: time="2024-02-09T19:29:29.133007603Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:29:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3548 runtime=io.containerd.runc.v2\n" Feb 9 19:29:29.434428 systemd[1]: run-containerd-runc-k8s.io-e92a78b88362e6056defb5b3fb6b1c9b7aef85e4590af4a2d1407efb4df79892-runc.febGcu.mount: Deactivated successfully. Feb 9 19:29:29.434586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e92a78b88362e6056defb5b3fb6b1c9b7aef85e4590af4a2d1407efb4df79892-rootfs.mount: Deactivated successfully. Feb 9 19:29:29.456402 kubelet[1496]: E0209 19:29:29.456332 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:29.978272 env[1145]: time="2024-02-09T19:29:29.978209985Z" level=info msg="CreateContainer within sandbox \"235025f7b3ea38b23e323a8329a8ec7864699dd54a1566dc168b875ef6c825cf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:29:30.006731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870452469.mount: Deactivated successfully. 
Feb 9 19:29:30.011882 env[1145]: time="2024-02-09T19:29:30.011813838Z" level=info msg="CreateContainer within sandbox \"235025f7b3ea38b23e323a8329a8ec7864699dd54a1566dc168b875ef6c825cf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b0f250d03ef3b83da2808484994464fda7c2253b0640930d7540438e01db68e3\"" Feb 9 19:29:30.014345 env[1145]: time="2024-02-09T19:29:30.014299670Z" level=info msg="StartContainer for \"b0f250d03ef3b83da2808484994464fda7c2253b0640930d7540438e01db68e3\"" Feb 9 19:29:30.038905 systemd[1]: Started cri-containerd-b0f250d03ef3b83da2808484994464fda7c2253b0640930d7540438e01db68e3.scope. Feb 9 19:29:30.086670 systemd[1]: cri-containerd-b0f250d03ef3b83da2808484994464fda7c2253b0640930d7540438e01db68e3.scope: Deactivated successfully. Feb 9 19:29:30.089312 env[1145]: time="2024-02-09T19:29:30.089250628Z" level=info msg="StartContainer for \"b0f250d03ef3b83da2808484994464fda7c2253b0640930d7540438e01db68e3\" returns successfully" Feb 9 19:29:30.119885 env[1145]: time="2024-02-09T19:29:30.119819004Z" level=info msg="shim disconnected" id=b0f250d03ef3b83da2808484994464fda7c2253b0640930d7540438e01db68e3 Feb 9 19:29:30.120380 env[1145]: time="2024-02-09T19:29:30.120342313Z" level=warning msg="cleaning up after shim disconnected" id=b0f250d03ef3b83da2808484994464fda7c2253b0640930d7540438e01db68e3 namespace=k8s.io Feb 9 19:29:30.120532 env[1145]: time="2024-02-09T19:29:30.120375125Z" level=info msg="cleaning up dead shim" Feb 9 19:29:30.132878 env[1145]: time="2024-02-09T19:29:30.132808626Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:29:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3605 runtime=io.containerd.runc.v2\n" Feb 9 19:29:30.434510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0f250d03ef3b83da2808484994464fda7c2253b0640930d7540438e01db68e3-rootfs.mount: Deactivated successfully. 
Feb 9 19:29:30.456925 kubelet[1496]: E0209 19:29:30.456854 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:30.983101 env[1145]: time="2024-02-09T19:29:30.983033432Z" level=info msg="CreateContainer within sandbox \"235025f7b3ea38b23e323a8329a8ec7864699dd54a1566dc168b875ef6c825cf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:29:31.004529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount649293112.mount: Deactivated successfully. Feb 9 19:29:31.014811 env[1145]: time="2024-02-09T19:29:31.014706078Z" level=info msg="CreateContainer within sandbox \"235025f7b3ea38b23e323a8329a8ec7864699dd54a1566dc168b875ef6c825cf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fa7a3d5c8e8e4e0b2132a70365ad98f23bb9fd139ebae84f3ddb352f96bdba20\"" Feb 9 19:29:31.015693 env[1145]: time="2024-02-09T19:29:31.015649237Z" level=info msg="StartContainer for \"fa7a3d5c8e8e4e0b2132a70365ad98f23bb9fd139ebae84f3ddb352f96bdba20\"" Feb 9 19:29:31.041884 systemd[1]: Started cri-containerd-fa7a3d5c8e8e4e0b2132a70365ad98f23bb9fd139ebae84f3ddb352f96bdba20.scope. 
Feb 9 19:29:31.096054 env[1145]: time="2024-02-09T19:29:31.095973330Z" level=info msg="StartContainer for \"fa7a3d5c8e8e4e0b2132a70365ad98f23bb9fd139ebae84f3ddb352f96bdba20\" returns successfully" Feb 9 19:29:31.457610 kubelet[1496]: E0209 19:29:31.457509 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:31.523803 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 19:29:32.003362 kubelet[1496]: I0209 19:29:32.003316 1496 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-lpvhc" podStartSLOduration=7.003269946 pod.CreationTimestamp="2024-02-09 19:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:29:32.003145737 +0000 UTC m=+91.153238528" watchObservedRunningTime="2024-02-09 19:29:32.003269946 +0000 UTC m=+91.153362729" Feb 9 19:29:32.457803 kubelet[1496]: E0209 19:29:32.457735 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:32.824664 systemd[1]: run-containerd-runc-k8s.io-fa7a3d5c8e8e4e0b2132a70365ad98f23bb9fd139ebae84f3ddb352f96bdba20-runc.7noWz6.mount: Deactivated successfully. 
Feb 9 19:29:33.459022 kubelet[1496]: E0209 19:29:33.458967 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:34.455968 systemd-networkd[1026]: lxc_health: Link UP Feb 9 19:29:34.462919 kubelet[1496]: E0209 19:29:34.462873 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:34.480813 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:29:34.481492 systemd-networkd[1026]: lxc_health: Gained carrier Feb 9 19:29:35.087525 systemd[1]: run-containerd-runc-k8s.io-fa7a3d5c8e8e4e0b2132a70365ad98f23bb9fd139ebae84f3ddb352f96bdba20-runc.mwJl6v.mount: Deactivated successfully. Feb 9 19:29:35.464325 kubelet[1496]: E0209 19:29:35.464235 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:35.991706 systemd-networkd[1026]: lxc_health: Gained IPv6LL Feb 9 19:29:36.465463 kubelet[1496]: E0209 19:29:36.465407 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:37.368686 systemd[1]: run-containerd-runc-k8s.io-fa7a3d5c8e8e4e0b2132a70365ad98f23bb9fd139ebae84f3ddb352f96bdba20-runc.JCCdDV.mount: Deactivated successfully. 
Feb 9 19:29:37.457689 kubelet[1496]: E0209 19:29:37.457541 1496 upgradeaware.go:440] Error proxying data from backend to client: read tcp 127.0.0.1:43392->127.0.0.1:34685: read: connection reset by peer Feb 9 19:29:37.467016 kubelet[1496]: E0209 19:29:37.466912 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:38.467215 kubelet[1496]: E0209 19:29:38.467160 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:39.468106 kubelet[1496]: E0209 19:29:39.468040 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:39.618686 systemd[1]: run-containerd-runc-k8s.io-fa7a3d5c8e8e4e0b2132a70365ad98f23bb9fd139ebae84f3ddb352f96bdba20-runc.epoQFZ.mount: Deactivated successfully. Feb 9 19:29:40.468950 kubelet[1496]: E0209 19:29:40.468891 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:41.387818 kubelet[1496]: E0209 19:29:41.387753 1496 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:41.469719 kubelet[1496]: E0209 19:29:41.469651 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:42.470486 kubelet[1496]: E0209 19:29:42.470427 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:43.471134 kubelet[1496]: E0209 19:29:43.471058 1496 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"