Feb 9 19:31:48.081344 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 9 19:31:48.081386 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:31:48.081404 kernel: BIOS-provided physical RAM map: Feb 9 19:31:48.081418 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Feb 9 19:31:48.081430 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Feb 9 19:31:48.081443 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Feb 9 19:31:48.081464 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Feb 9 19:31:48.081478 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Feb 9 19:31:48.081492 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Feb 9 19:31:48.081506 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Feb 9 19:31:48.081520 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Feb 9 19:31:48.081533 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Feb 9 19:31:48.081547 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Feb 9 19:31:48.081562 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Feb 9 19:31:48.081583 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Feb 9 19:31:48.081597 kernel: NX (Execute Disable) protection: active Feb 9 19:31:48.081611 kernel: efi: EFI v2.70 by EDK II Feb 9 19:31:48.081627 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbe379198 RNG=0xbfb73018 TPMEventLog=0xbe2bd018 Feb 9 19:31:48.081641 kernel: random: crng init done Feb 9 19:31:48.081655 kernel: SMBIOS 2.4 present. 
Feb 9 19:31:48.081671 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023 Feb 9 19:31:48.081686 kernel: Hypervisor detected: KVM Feb 9 19:31:48.081705 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 9 19:31:48.081720 kernel: kvm-clock: cpu 0, msr 58faa001, primary cpu clock Feb 9 19:31:48.081735 kernel: kvm-clock: using sched offset of 12700475736 cycles Feb 9 19:31:48.081751 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 9 19:31:48.081766 kernel: tsc: Detected 2299.998 MHz processor Feb 9 19:31:48.081781 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 19:31:48.081797 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 19:31:48.081812 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Feb 9 19:31:48.081827 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 19:31:48.081851 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Feb 9 19:31:48.081870 kernel: Using GB pages for direct mapping Feb 9 19:31:48.081885 kernel: Secure boot disabled Feb 9 19:31:48.081901 kernel: ACPI: Early table checksum verification disabled Feb 9 19:31:48.081916 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Feb 9 19:31:48.081931 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Feb 9 19:31:48.081946 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Feb 9 19:31:48.081962 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Feb 9 19:31:48.081978 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Feb 9 19:31:48.082003 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217) Feb 9 19:31:48.082020 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Feb 9 19:31:48.082036 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Feb 9 19:31:48.082052 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Feb 9 19:31:48.082069 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Feb 9 19:31:48.082086 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Feb 9 19:31:48.082106 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Feb 9 19:31:48.082122 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Feb 9 19:31:48.082139 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Feb 9 19:31:48.082154 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Feb 9 19:31:48.082170 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Feb 9 19:31:48.082214 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Feb 9 19:31:48.082231 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Feb 9 19:31:48.082246 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Feb 9 19:31:48.082261 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Feb 9 19:31:48.082283 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 9 19:31:48.082299 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 9 19:31:48.082314 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 9 19:31:48.082330 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Feb 9 19:31:48.082345 kernel: ACPI: SRAT: Node 0 PXM 0 
[mem 0x100000000-0x21fffffff] Feb 9 19:31:48.082362 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Feb 9 19:31:48.082379 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Feb 9 19:31:48.082395 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Feb 9 19:31:48.082411 kernel: Zone ranges: Feb 9 19:31:48.082431 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 19:31:48.082448 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 9 19:31:48.082465 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Feb 9 19:31:48.082481 kernel: Movable zone start for each node Feb 9 19:31:48.082497 kernel: Early memory node ranges Feb 9 19:31:48.082514 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Feb 9 19:31:48.082530 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Feb 9 19:31:48.082547 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Feb 9 19:31:48.082561 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Feb 9 19:31:48.082581 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Feb 9 19:31:48.082597 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Feb 9 19:31:48.082614 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 19:31:48.082630 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Feb 9 19:31:48.082647 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Feb 9 19:31:48.082663 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Feb 9 19:31:48.082680 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Feb 9 19:31:48.082696 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 9 19:31:48.082713 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 9 19:31:48.082732 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 9 19:31:48.082749 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 9 19:31:48.082765 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 19:31:48.082782 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 9 19:31:48.082798 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 9 19:31:48.082815 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 19:31:48.082837 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 9 19:31:48.082854 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Feb 9 19:31:48.082870 kernel: Booting paravirtualized kernel on KVM Feb 9 19:31:48.082890 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 19:31:48.082906 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 9 19:31:48.082923 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 9 19:31:48.082940 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 9 19:31:48.082956 kernel: pcpu-alloc: [0] 0 1 Feb 9 19:31:48.082972 kernel: kvm-guest: PV spinlocks enabled Feb 9 19:31:48.082988 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 9 19:31:48.083005 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1931256 Feb 9 19:31:48.083021 kernel: Policy zone: Normal Feb 9 19:31:48.083042 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:31:48.083059 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 19:31:48.083075 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 9 19:31:48.083092 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 19:31:48.083108 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 19:31:48.083125 kernel: Memory: 7536508K/7860584K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 323816K reserved, 0K cma-reserved) Feb 9 19:31:48.083142 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 9 19:31:48.083158 kernel: Kernel/User page tables isolation: enabled Feb 9 19:31:48.083207 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 19:31:48.083224 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 19:31:48.083241 kernel: rcu: Hierarchical RCU implementation. Feb 9 19:31:48.083258 kernel: rcu: RCU event tracing is enabled. Feb 9 19:31:48.083274 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 9 19:31:48.083291 kernel: Rude variant of Tasks RCU enabled. Feb 9 19:31:48.083308 kernel: Tracing variant of Tasks RCU enabled. Feb 9 19:31:48.083324 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 9 19:31:48.083340 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 9 19:31:48.083361 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 9 19:31:48.083390 kernel: Console: colour dummy device 80x25 Feb 9 19:31:48.083407 kernel: printk: console [ttyS0] enabled Feb 9 19:31:48.083427 kernel: ACPI: Core revision 20210730 Feb 9 19:31:48.083444 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 19:31:48.083462 kernel: x2apic enabled Feb 9 19:31:48.083479 kernel: Switched APIC routing to physical x2apic. Feb 9 19:31:48.083496 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Feb 9 19:31:48.083514 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 9 19:31:48.083532 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Feb 9 19:31:48.083553 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Feb 9 19:31:48.083572 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Feb 9 19:31:48.083589 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 19:31:48.083606 kernel: Spectre V2 : Mitigation: IBRS Feb 9 19:31:48.083623 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 19:31:48.083641 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 9 19:31:48.083662 kernel: RETBleed: Mitigation: IBRS Feb 9 19:31:48.083679 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 9 19:31:48.083697 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Feb 9 19:31:48.083714 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 9 19:31:48.083732 kernel: MDS: Mitigation: Clear CPU buffers Feb 9 19:31:48.083750 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 19:31:48.083768 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 19:31:48.083785 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 19:31:48.083802 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 19:31:48.083823 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 19:31:48.083846 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 9 19:31:48.083864 kernel: Freeing SMP alternatives memory: 32K Feb 9 19:31:48.083881 kernel: pid_max: default: 32768 minimum: 301 Feb 9 19:31:48.083898 kernel: LSM: Security Framework initializing Feb 9 19:31:48.083916 kernel: SELinux: Initializing. Feb 9 19:31:48.083933 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 19:31:48.083951 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 19:31:48.083968 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Feb 9 19:31:48.083989 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Feb 9 19:31:48.084006 kernel: signal: max sigframe size: 1776 Feb 9 19:31:48.084024 kernel: rcu: Hierarchical SRCU implementation. Feb 9 19:31:48.084041 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 9 19:31:48.084058 kernel: smp: Bringing up secondary CPUs ... Feb 9 19:31:48.084075 kernel: x86: Booting SMP configuration: Feb 9 19:31:48.084093 kernel: .... node #0, CPUs: #1 Feb 9 19:31:48.084110 kernel: kvm-clock: cpu 1, msr 58faa041, secondary cpu clock Feb 9 19:31:48.084128 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 9 19:31:48.084150 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Feb 9 19:31:48.084167 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 19:31:48.084201 kernel: smpboot: Max logical packages: 1 Feb 9 19:31:48.084219 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Feb 9 19:31:48.084236 kernel: devtmpfs: initialized Feb 9 19:31:48.084253 kernel: x86/mm: Memory block size: 128MB Feb 9 19:31:48.084270 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Feb 9 19:31:48.084287 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 19:31:48.084303 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 9 19:31:48.084325 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 19:31:48.084341 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 19:31:48.084357 kernel: audit: initializing netlink subsys (disabled) Feb 9 19:31:48.084375 kernel: audit: type=2000 audit(1707507107.026:1): state=initialized audit_enabled=0 res=1 Feb 9 19:31:48.084391 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 19:31:48.084409 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 19:31:48.084426 kernel: cpuidle: using governor menu Feb 9 19:31:48.084443 kernel: ACPI: bus type PCI registered Feb 9 19:31:48.084460 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 19:31:48.084480 kernel: dca service started, version 1.12.1 Feb 9 19:31:48.084498 kernel: PCI: Using configuration type 1 for base access Feb 9 19:31:48.084515 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 9 19:31:48.084532 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 19:31:48.084549 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 19:31:48.084566 kernel: ACPI: Added _OSI(Module Device) Feb 9 19:31:48.084583 kernel: ACPI: Added _OSI(Processor Device) Feb 9 19:31:48.084600 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 19:31:48.084617 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 19:31:48.084638 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 19:31:48.084655 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 19:31:48.084672 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 19:31:48.084689 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 9 19:31:48.084706 kernel: ACPI: Interpreter enabled Feb 9 19:31:48.084723 kernel: ACPI: PM: (supports S0 S3 S5) Feb 9 19:31:48.084740 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 19:31:48.084758 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 19:31:48.084775 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 9 19:31:48.084796 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 19:31:48.085016 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 9 19:31:48.085192 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Feb 9 19:31:48.085213 kernel: PCI host bridge to bus 0000:00 Feb 9 19:31:48.085371 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 19:31:48.085515 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 9 19:31:48.085695 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 19:31:48.085836 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Feb 9 19:31:48.085973 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 19:31:48.086147 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 9 19:31:48.086344 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Feb 9 19:31:48.086522 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 9 19:31:48.086683 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 9 19:31:48.086860 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Feb 9 19:31:48.087024 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Feb 9 19:31:48.087202 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Feb 9 19:31:48.087386 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 9 19:31:48.087550 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Feb 9 19:31:48.087712 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Feb 9 19:31:48.087884 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Feb 9 19:31:48.088049 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Feb 9 19:31:48.088231 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Feb 9 19:31:48.088254 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 9 19:31:48.088273 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 9 19:31:48.088291 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 9 19:31:48.088309 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 9 19:31:48.088327 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 9 19:31:48.088356 kernel: iommu: Default domain type: Translated Feb 9 19:31:48.088374 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 19:31:48.088392 kernel: vgaarb: loaded Feb 9 19:31:48.088409 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 19:31:48.088427 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 19:31:48.088444 kernel: PTP clock support registered Feb 9 19:31:48.088462 kernel: Registered efivars operations Feb 9 19:31:48.088480 kernel: PCI: Using ACPI for IRQ routing Feb 9 19:31:48.088497 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 19:31:48.088519 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Feb 9 19:31:48.088536 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Feb 9 19:31:48.088554 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Feb 9 19:31:48.088572 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Feb 9 19:31:48.088589 kernel: clocksource: Switched to clocksource kvm-clock Feb 9 19:31:48.088607 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 19:31:48.088625 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 19:31:48.088643 kernel: pnp: PnP ACPI init Feb 9 19:31:48.088661 kernel: pnp: PnP ACPI: found 7 devices Feb 9 19:31:48.088682 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 19:31:48.088699 kernel: NET: Registered PF_INET protocol family Feb 9 19:31:48.088717 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 9 19:31:48.088735 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 9 19:31:48.088753 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 19:31:48.088770 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 19:31:48.088788 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 9 19:31:48.088805 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 9 19:31:48.088823 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 9 19:31:48.088843 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 9 19:31:48.088861 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 19:31:48.088879 kernel: NET: Registered PF_XDP protocol family Feb 9 19:31:48.089033 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 19:31:48.089193 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 19:31:48.089349 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 19:31:48.089491 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Feb 9 19:31:48.089674 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 9 19:31:48.089705 kernel: PCI: CLS 0 bytes, default 64 Feb 9 19:31:48.089724 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 9 19:31:48.089742 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB) Feb 9 19:31:48.089760 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 9 19:31:48.089778 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 9 19:31:48.089796 kernel: clocksource: Switched to clocksource tsc Feb 9 19:31:48.089813 kernel: Initialise system trusted keyrings Feb 9 19:31:48.089830 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 9 19:31:48.089852 kernel: Key type asymmetric registered Feb 9 19:31:48.089869 kernel: Asymmetric key parser 'x509' registered Feb 9 19:31:48.089886 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 19:31:48.089904 kernel: io scheduler mq-deadline registered Feb 9 
19:31:48.089922 kernel: io scheduler kyber registered Feb 9 19:31:48.089939 kernel: io scheduler bfq registered Feb 9 19:31:48.089957 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 19:31:48.089976 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 9 19:31:48.090142 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Feb 9 19:31:48.090169 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 9 19:31:48.090362 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Feb 9 19:31:48.090385 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 9 19:31:48.090546 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Feb 9 19:31:48.090569 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 19:31:48.090587 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 19:31:48.090605 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 9 19:31:48.090622 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Feb 9 19:31:48.090639 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Feb 9 19:31:48.090812 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Feb 9 19:31:48.090836 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 9 19:31:48.090855 kernel: i8042: Warning: Keylock active Feb 9 19:31:48.090872 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 19:31:48.090890 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 19:31:48.091048 kernel: rtc_cmos 00:00: RTC can wake from S4 Feb 9 19:31:48.093042 kernel: rtc_cmos 00:00: registered as rtc0 Feb 9 19:31:48.093252 kernel: rtc_cmos 00:00: setting system clock to 2024-02-09T19:31:47 UTC (1707507107) Feb 9 19:31:48.093412 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 9 19:31:48.093435 kernel: intel_pstate: CPU model not supported Feb 9 19:31:48.093454 kernel: pstore: Registered efi as persistent store backend Feb 9 19:31:48.093472 kernel: NET: Registered PF_INET6 protocol family Feb 9 19:31:48.093489 kernel: Segment Routing with IPv6 Feb 9 19:31:48.093507 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 19:31:48.093524 kernel: NET: Registered PF_PACKET protocol family Feb 9 19:31:48.093542 kernel: Key type dns_resolver registered Feb 9 19:31:48.093564 kernel: IPI shorthand broadcast: enabled Feb 9 19:31:48.093582 kernel: sched_clock: Marking stable (711682288, 123312188)->(856013419, -21018943) Feb 9 19:31:48.093599 kernel: registered taskstats version 1 Feb 9 19:31:48.093616 kernel: Loading compiled-in X.509 certificates Feb 9 19:31:48.093633 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 19:31:48.093652 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 19:31:48.093669 kernel: Key type .fscrypt registered Feb 9 19:31:48.093686 kernel: Key type fscrypt-provisioning registered Feb 9 19:31:48.093704 kernel: pstore: Using crash dump compression: deflate Feb 9 19:31:48.093724 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:31:48.093742 kernel: ima: No architecture policies found Feb 9 19:31:48.093759 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 19:31:48.093776 kernel: Write protecting the kernel read-only data: 28672k Feb 9 19:31:48.093793 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 19:31:48.093811 kernel: Freeing unused kernel image 
(rodata/data gap) memory: 636K Feb 9 19:31:48.093828 kernel: Run /init as init process Feb 9 19:31:48.093845 kernel: with arguments: Feb 9 19:31:48.093866 kernel: /init Feb 9 19:31:48.093883 kernel: with environment: Feb 9 19:31:48.093900 kernel: HOME=/ Feb 9 19:31:48.093916 kernel: TERM=linux Feb 9 19:31:48.093934 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:31:48.093954 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:31:48.093976 systemd[1]: Detected virtualization kvm. Feb 9 19:31:48.093995 systemd[1]: Detected architecture x86-64. Feb 9 19:31:48.094016 systemd[1]: Running in initrd. Feb 9 19:31:48.094034 systemd[1]: No hostname configured, using default hostname. Feb 9 19:31:48.094052 systemd[1]: Hostname set to . Feb 9 19:31:48.094070 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:31:48.094088 systemd[1]: Queued start job for default target initrd.target. Feb 9 19:31:48.094105 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:31:48.094122 systemd[1]: Reached target cryptsetup.target. Feb 9 19:31:48.094140 systemd[1]: Reached target paths.target. Feb 9 19:31:48.094161 systemd[1]: Reached target slices.target. Feb 9 19:31:48.094190 systemd[1]: Reached target swap.target. Feb 9 19:31:48.101371 systemd[1]: Reached target timers.target. Feb 9 19:31:48.101394 systemd[1]: Listening on iscsid.socket. Feb 9 19:31:48.101414 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:31:48.101433 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:31:48.101452 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:31:48.101471 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:31:48.101495 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:31:48.101514 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:31:48.101533 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:31:48.101552 systemd[1]: Reached target sockets.target. Feb 9 19:31:48.101571 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:31:48.101590 systemd[1]: Finished network-cleanup.service. Feb 9 19:31:48.101609 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:31:48.101627 systemd[1]: Starting systemd-journald.service... Feb 9 19:31:48.101645 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:31:48.101668 systemd[1]: Starting systemd-resolved.service... Feb 9 19:31:48.101687 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:31:48.101728 systemd-journald[189]: Journal started Feb 9 19:31:48.101820 systemd-journald[189]: Runtime Journal (/run/log/journal/41f843545f79cca638c03ca09a76dabc) is 8.0M, max 148.8M, 140.8M free. Feb 9 19:31:48.104285 systemd[1]: Started systemd-journald.service. Feb 9 19:31:48.109965 systemd-modules-load[190]: Inserted module 'overlay' Feb 9 19:31:48.115306 kernel: audit: type=1130 audit(1707507108.109:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:31:48.114113 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:31:48.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.123546 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:31:48.147507 kernel: audit: type=1130 audit(1707507108.122:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.147545 kernel: audit: type=1130 audit(1707507108.129:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.147568 kernel: audit: type=1130 audit(1707507108.136:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.130529 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 19:31:48.139031 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:31:48.147215 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:31:48.160595 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:31:48.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.164267 kernel: audit: type=1130 audit(1707507108.159:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.164300 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:31:48.166811 systemd-resolved[191]: Positive Trust Anchors: Feb 9 19:31:48.167146 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:31:48.167361 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:31:48.174424 systemd-resolved[191]: Defaulting to hostname 'linux'. Feb 9 19:31:48.176010 systemd[1]: Started systemd-resolved.service. Feb 9 19:31:48.176149 systemd[1]: Reached target nss-lookup.target. 
Feb 9 19:31:48.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.180212 kernel: audit: type=1130 audit(1707507108.174:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.180253 kernel: Bridge firewalling registered Feb 9 19:31:48.181235 systemd-modules-load[190]: Inserted module 'br_netfilter' Feb 9 19:31:48.193505 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:31:48.209315 kernel: audit: type=1130 audit(1707507108.196:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.209361 kernel: SCSI subsystem initialized Feb 9 19:31:48.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.198597 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:31:48.220051 dracut-cmdline[205]: dracut-dracut-053 Feb 9 19:31:48.224128 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:31:48.236313 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:31:48.236351 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:31:48.236373 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:31:48.233716 systemd-modules-load[190]: Inserted module 'dm_multipath' Feb 9 19:31:48.234695 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:31:48.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.250583 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:31:48.261309 kernel: audit: type=1130 audit(1707507108.248:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.264701 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:31:48.275322 kernel: audit: type=1130 audit(1707507108.268:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.315209 kernel: Loading iSCSI transport class v2.0-870. 
Feb 9 19:31:48.328215 kernel: iscsi: registered transport (tcp) Feb 9 19:31:48.353436 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:31:48.353518 kernel: QLogic iSCSI HBA Driver Feb 9 19:31:48.397812 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:31:48.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.403578 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:31:48.460255 kernel: raid6: avx2x4 gen() 18131 MB/s Feb 9 19:31:48.477228 kernel: raid6: avx2x4 xor() 8173 MB/s Feb 9 19:31:48.494254 kernel: raid6: avx2x2 gen() 17768 MB/s Feb 9 19:31:48.511226 kernel: raid6: avx2x2 xor() 18511 MB/s Feb 9 19:31:48.528260 kernel: raid6: avx2x1 gen() 13797 MB/s Feb 9 19:31:48.545226 kernel: raid6: avx2x1 xor() 16054 MB/s Feb 9 19:31:48.562222 kernel: raid6: sse2x4 gen() 11034 MB/s Feb 9 19:31:48.579208 kernel: raid6: sse2x4 xor() 6792 MB/s Feb 9 19:31:48.596218 kernel: raid6: sse2x2 gen() 12134 MB/s Feb 9 19:31:48.613219 kernel: raid6: sse2x2 xor() 7434 MB/s Feb 9 19:31:48.630225 kernel: raid6: sse2x1 gen() 10485 MB/s Feb 9 19:31:48.647581 kernel: raid6: sse2x1 xor() 5207 MB/s Feb 9 19:31:48.647624 kernel: raid6: using algorithm avx2x4 gen() 18131 MB/s Feb 9 19:31:48.647646 kernel: raid6: .... xor() 8173 MB/s, rmw enabled Feb 9 19:31:48.648307 kernel: raid6: using avx2x2 recovery algorithm Feb 9 19:31:48.663216 kernel: xor: automatically using best checksumming function avx Feb 9 19:31:48.771214 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:31:48.782425 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:31:48.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.782000 audit: BPF prog-id=7 op=LOAD Feb 9 19:31:48.782000 audit: BPF prog-id=8 op=LOAD Feb 9 19:31:48.784676 systemd[1]: Starting systemd-udevd.service... Feb 9 19:31:48.801285 systemd-udevd[387]: Using default interface naming scheme 'v252'. Feb 9 19:31:48.808074 systemd[1]: Started systemd-udevd.service. Feb 9 19:31:48.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.811718 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:31:48.833786 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation Feb 9 19:31:48.872981 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:31:48.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:48.874363 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:31:48.937762 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:31:48.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:49.015204 kernel: scsi host0: Virtio SCSI HBA Feb 9 19:31:49.033243 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:31:49.107368 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 9 19:31:49.107440 kernel: AES CTR mode by8 optimization enabled Feb 9 19:31:49.111209 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Feb 9 19:31:49.164823 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Feb 9 19:31:49.165140 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Feb 9 19:31:49.165383 kernel: sd 0:0:1:0: [sda] Write Protect is off Feb 9 19:31:49.165587 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Feb 9 19:31:49.165797 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 19:31:49.174904 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 19:31:49.174968 kernel: GPT:17805311 != 25165823 Feb 9 19:31:49.174990 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:31:49.175012 kernel: GPT:17805311 != 25165823 Feb 9 19:31:49.175439 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:31:49.177203 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:31:49.179200 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Feb 9 19:31:49.237219 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (439) Feb 9 19:31:49.242402 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:31:49.254814 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:31:49.260028 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:31:49.260464 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:31:49.272252 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:31:49.274285 systemd[1]: Starting disk-uuid.service... Feb 9 19:31:49.284755 disk-uuid[517]: Primary Header is updated. Feb 9 19:31:49.284755 disk-uuid[517]: Secondary Entries is updated. Feb 9 19:31:49.284755 disk-uuid[517]: Secondary Header is updated. Feb 9 19:31:49.296208 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:31:49.314215 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:31:49.322210 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:31:50.320221 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:31:50.320294 disk-uuid[518]: The operation has completed successfully. Feb 9 19:31:50.386796 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:31:50.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:50.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:50.386927 systemd[1]: Finished disk-uuid.service. Feb 9 19:31:50.401723 systemd[1]: Starting verity-setup.service... Feb 9 19:31:50.429398 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 19:31:50.500219 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:31:50.510555 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:31:50.522572 systemd[1]: Finished verity-setup.service. Feb 9 19:31:50.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:50.610122 systemd[1]: Mounted sysusr-usr.mount. 
Feb 9 19:31:50.624348 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:31:50.617492 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:31:50.660373 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:31:50.660409 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:31:50.660425 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:31:50.618403 systemd[1]: Starting ignition-setup.service... Feb 9 19:31:50.625588 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:31:50.689356 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 19:31:50.689975 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:31:50.706048 systemd[1]: Finished ignition-setup.service. Feb 9 19:31:50.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:50.707602 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:31:50.742082 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:31:50.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:50.750000 audit: BPF prog-id=9 op=LOAD Feb 9 19:31:50.752039 systemd[1]: Starting systemd-networkd.service... Feb 9 19:31:50.784599 systemd-networkd[692]: lo: Link UP Feb 9 19:31:50.784612 systemd-networkd[692]: lo: Gained carrier Feb 9 19:31:50.785666 systemd-networkd[692]: Enumeration completed Feb 9 19:31:50.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:50.785787 systemd[1]: Started systemd-networkd.service. Feb 9 19:31:50.786194 systemd-networkd[692]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:31:50.788839 systemd-networkd[692]: eth0: Link UP Feb 9 19:31:50.788846 systemd-networkd[692]: eth0: Gained carrier Feb 9 19:31:50.798277 systemd-networkd[692]: eth0: DHCPv4 address 10.128.0.33/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 9 19:31:50.799598 systemd[1]: Reached target network.target. Feb 9 19:31:50.822565 systemd[1]: Starting iscsiuio.service... Feb 9 19:31:50.887518 systemd[1]: Started iscsiuio.service. Feb 9 19:31:50.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:50.895440 systemd[1]: Starting iscsid.service... Feb 9 19:31:50.916297 iscsid[701]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:31:50.916297 iscsid[701]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 19:31:50.916297 iscsid[701]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. 
Feb 9 19:31:50.916297 iscsid[701]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:31:50.916297 iscsid[701]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:31:50.916297 iscsid[701]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:31:50.916297 iscsid[701]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:31:50.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:51.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:50.902605 systemd[1]: Started iscsid.service. Feb 9 19:31:51.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:50.973843 ignition[662]: Ignition 2.14.0 Feb 9 19:31:50.984957 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:31:50.973856 ignition[662]: Stage: fetch-offline Feb 9 19:31:51.005890 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:31:50.973929 ignition[662]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:31:51.016739 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:31:50.973967 ignition[662]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:31:51.042688 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:31:51.004405 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:31:51.057310 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:31:51.005942 ignition[662]: parsed url from cmdline: "" Feb 9 19:31:51.063441 systemd[1]: Reached target remote-fs.target. Feb 9 19:31:51.005948 ignition[662]: no config URL provided Feb 9 19:31:51.086440 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:31:51.005957 ignition[662]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:31:51.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:51.108392 systemd[1]: Starting ignition-fetch.service... Feb 9 19:31:51.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:51.005969 ignition[662]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:31:51.157995 unknown[716]: fetched base config from "system" Feb 9 19:31:51.005979 ignition[662]: failed to fetch config: resource requires networking Feb 9 19:31:51.158008 unknown[716]: fetched base config from "system" Feb 9 19:31:51.006753 ignition[662]: Ignition finished successfully Feb 9 19:31:51.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:31:51.158022 unknown[716]: fetched user config from "gcp" Feb 9 19:31:51.119701 ignition[716]: Ignition 2.14.0 Feb 9 19:31:51.173803 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:31:51.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:51.119712 ignition[716]: Stage: fetch Feb 9 19:31:51.189689 systemd[1]: Finished ignition-fetch.service. Feb 9 19:31:51.119838 ignition[716]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:31:51.208604 systemd[1]: Starting ignition-kargs.service... Feb 9 19:31:51.119868 ignition[716]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:31:51.247735 systemd[1]: Finished ignition-kargs.service. Feb 9 19:31:51.127449 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:31:51.255530 systemd[1]: Starting ignition-disks.service... Feb 9 19:31:51.127694 ignition[716]: parsed url from cmdline: "" Feb 9 19:31:51.275727 systemd[1]: Finished ignition-disks.service. Feb 9 19:31:51.127701 ignition[716]: no config URL provided Feb 9 19:31:51.286696 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:31:51.127708 ignition[716]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:31:51.308331 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:31:51.127718 ignition[716]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:31:51.320319 systemd[1]: Reached target local-fs.target. Feb 9 19:31:51.127752 ignition[716]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Feb 9 19:31:51.333305 systemd[1]: Reached target sysinit.target. Feb 9 19:31:51.136743 ignition[716]: GET result: OK Feb 9 19:31:51.333421 systemd[1]: Reached target basic.target. Feb 9 19:31:51.136829 ignition[716]: parsing config with SHA512: 584333a4c492f0d5259f2f657bc35d393dba53d6137c4c505041dc5ca26a750ba0a240854635093e13f495a6e2ad161e3db63d7d9726793ce74a80d0f126bbd7 Feb 9 19:31:51.356453 systemd[1]: Starting systemd-fsck-root.service... 
Feb 9 19:31:51.158679 ignition[716]: fetch: fetch complete Feb 9 19:31:51.158686 ignition[716]: fetch: fetch passed Feb 9 19:31:51.158730 ignition[716]: Ignition finished successfully Feb 9 19:31:51.221158 ignition[722]: Ignition 2.14.0 Feb 9 19:31:51.221168 ignition[722]: Stage: kargs Feb 9 19:31:51.221337 ignition[722]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:31:51.221366 ignition[722]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:31:51.228522 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:31:51.229727 ignition[722]: kargs: kargs passed Feb 9 19:31:51.229774 ignition[722]: Ignition finished successfully Feb 9 19:31:51.265905 ignition[728]: Ignition 2.14.0 Feb 9 19:31:51.265914 ignition[728]: Stage: disks Feb 9 19:31:51.266041 ignition[728]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:31:51.266074 ignition[728]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:31:51.273255 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:31:51.274468 ignition[728]: disks: disks passed Feb 9 19:31:51.274512 ignition[728]: Ignition finished successfully Feb 9 19:31:51.378610 systemd-fsck[736]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks Feb 9 19:31:51.550096 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:31:51.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:51.560474 systemd[1]: Mounting sysroot.mount... Feb 9 19:31:51.589550 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:31:51.585392 systemd[1]: Mounted sysroot.mount. Feb 9 19:31:51.596556 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:31:51.616562 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:31:51.633799 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:31:51.633890 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:31:51.633937 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:31:51.654754 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:31:51.673591 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:31:51.730463 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (742) Feb 9 19:31:51.730525 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:31:51.730549 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:31:51.730572 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:31:51.723399 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:31:51.752323 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 19:31:51.747945 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 19:31:51.762361 initrd-setup-root[763]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:31:51.772277 initrd-setup-root[773]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:31:51.782431 initrd-setup-root[781]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:31:51.800282 initrd-setup-root[789]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:31:51.830228 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:31:51.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:51.831430 systemd[1]: Starting ignition-mount.service... Feb 9 19:31:51.852318 systemd[1]: Starting sysroot-boot.service... Feb 9 19:31:51.867564 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:31:51.867700 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:31:51.892976 ignition[808]: INFO : Ignition 2.14.0 Feb 9 19:31:51.892976 ignition[808]: INFO : Stage: mount Feb 9 19:31:51.892976 ignition[808]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:31:51.892976 ignition[808]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:31:51.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:51.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:51.893561 systemd[1]: Finished sysroot-boot.service. Feb 9 19:31:51.963356 ignition[808]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:31:51.963356 ignition[808]: INFO : mount: mount passed Feb 9 19:31:51.963356 ignition[808]: INFO : Ignition finished successfully Feb 9 19:31:52.026309 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (817) Feb 9 19:31:52.026351 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:31:52.026375 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:31:52.026406 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:31:52.026436 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 19:31:51.907706 systemd[1]: Finished ignition-mount.service. Feb 9 19:31:51.925597 systemd[1]: Starting ignition-files.service... Feb 9 19:31:51.960338 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:31:52.057371 ignition[836]: INFO : Ignition 2.14.0 Feb 9 19:31:52.057371 ignition[836]: INFO : Stage: files Feb 9 19:31:52.057371 ignition[836]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:31:52.057371 ignition[836]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:31:52.057371 ignition[836]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:31:52.123321 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (836) Feb 9 19:31:52.021379 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 19:31:52.131285 ignition[836]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:31:52.131285 ignition[836]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:31:52.131285 ignition[836]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:31:52.131285 ignition[836]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:31:52.131285 ignition[836]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:31:52.131285 ignition[836]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:31:52.131285 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts" Feb 9 19:31:52.131285 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:31:52.131285 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1983610340" Feb 9 19:31:52.131285 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1983610340": device or resource busy Feb 9 19:31:52.131285 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1983610340", trying btrfs: device or resource busy Feb 9 19:31:52.131285 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1983610340" Feb 9 19:31:52.131285 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1983610340" Feb 9 19:31:52.131285 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem1983610340" Feb 9 19:31:52.131285 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem1983610340" Feb 9 19:31:52.131285 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts" Feb 9 19:31:52.131285 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 9 19:31:52.077481 unknown[836]: wrote ssh authorized keys file for user: core Feb 9 19:31:52.391301 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 9 19:31:52.391301 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 19:31:52.662165 ignition[836]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 9 19:31:52.686321 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 9 19:31:52.686321 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 9 19:31:52.686321 ignition[836]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:31:52.704351 systemd-networkd[692]: eth0: Gained IPv6LL Feb 9 19:31:52.837266 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 19:31:52.953994 ignition[836]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 9 19:31:52.977312 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 9 19:31:52.977312 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Feb 9 19:31:52.977312 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:31:52.977312 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem827368805" Feb 9 19:31:52.977312 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem827368805": device or resource busy Feb 9 19:31:52.977312 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem827368805", trying btrfs: device or resource busy Feb 9 19:31:52.977312 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem827368805" Feb 9 19:31:52.977312 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem827368805" Feb 9 19:31:52.977312 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem827368805" Feb 9 19:31:52.977312 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem827368805" Feb 9 19:31:52.977312 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Feb 9 19:31:52.977312 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:31:52.977312 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(d): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Feb 9 19:31:52.969235 systemd[1]: mnt-oem827368805.mount: Deactivated successfully. 
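Each artifact in the files stage above follows the same pattern: GET the URL, check the body against the expected SHA512 printed in the log, then write it under /sysroot. A hedged Python sketch of that verify-before-write step (illustrative only; Ignition's own implementation is in Go):

import hashlib
import urllib.request

def fetch_and_verify(url: str, expected_sha512: str, dest: str) -> None:
    # Download the artifact, e.g. the crictl tarball fetched in op(8) above.
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    digest = hashlib.sha512(data).hexdigest()
    if digest != expected_sha512:
        # Mirrors the "file matches expected sum of: ..." check in the log.
        raise ValueError(f"checksum mismatch for {url}: got {digest}")
    with open(dest, "wb") as f:
        f.write(data)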
Feb 9 19:31:53.201309 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(d): GET result: OK Feb 9 19:31:53.354614 ignition[836]: DEBUG : files: createFilesystemsFiles: createFiles: op(d): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Feb 9 19:31:53.378324 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:31:53.378324 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:31:53.378324 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:31:53.378324 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Feb 9 19:31:53.912988 ignition[836]: DEBUG : files: createFilesystemsFiles: createFiles: op(e): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Feb 9 19:31:53.936338 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:31:53.936338 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:31:53.936338 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:31:53.936338 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:31:53.936338 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:31:53.936338 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:31:53.936338 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:31:53.936338 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Feb 9 19:31:53.936338 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(12): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:31:53.936338 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2310590313" Feb 9 19:31:53.936338 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(12): op(13): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2310590313": device or resource busy Feb 9 19:31:53.936338 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(12): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2310590313", trying btrfs: device or resource busy Feb 9 19:31:53.936338 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2310590313" Feb 9 19:31:53.936338 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(14): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2310590313" Feb 9 19:31:53.936338 ignition[836]: INFO : 
files: createFilesystemsFiles: createFiles: op(12): op(15): [started] unmounting "/mnt/oem2310590313" Feb 9 19:31:54.414322 kernel: kauditd_printk_skb: 26 callbacks suppressed Feb 9 19:31:54.414382 kernel: audit: type=1130 audit(1707507113.964:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.414409 kernel: audit: type=1130 audit(1707507114.054:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.414434 kernel: audit: type=1130 audit(1707507114.098:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.414459 kernel: audit: type=1131 audit(1707507114.098:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.414473 kernel: audit: type=1130 audit(1707507114.214:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.414487 kernel: audit: type=1131 audit(1707507114.214:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.414501 kernel: audit: type=1130 audit(1707507114.354:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:53.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:53.932941 systemd[1]: mnt-oem2310590313.mount: Deactivated successfully. 
Feb 9 19:31:54.430360 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(12): op(15): [finished] unmounting "/mnt/oem2310590313" Feb 9 19:31:54.430360 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Feb 9 19:31:54.430360 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(16): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Feb 9 19:31:54.430360 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(16): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:31:54.430360 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(16): op(17): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2806731242" Feb 9 19:31:54.430360 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(16): op(17): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2806731242": device or resource busy Feb 9 19:31:54.430360 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(16): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2806731242", trying btrfs: device or resource busy Feb 9 19:31:54.430360 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(16): op(18): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2806731242" Feb 9 19:31:54.430360 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(16): op(18): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2806731242" Feb 9 19:31:54.430360 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(16): op(19): [started] unmounting "/mnt/oem2806731242" Feb 9 19:31:54.430360 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(16): op(19): [finished] unmounting "/mnt/oem2806731242" Feb 9 19:31:54.430360 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(16): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Feb 9 19:31:54.430360 ignition[836]: INFO : files: op(1a): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:31:54.430360 ignition[836]: INFO : files: op(1a): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:31:54.430360 ignition[836]: INFO : files: op(1b): [started] processing unit "oem-gce.service" Feb 9 19:31:54.430360 ignition[836]: INFO : files: op(1b): [finished] processing unit "oem-gce.service" Feb 9 19:31:54.816345 kernel: audit: type=1131 audit(1707507114.474:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.816396 kernel: audit: type=1131 audit(1707507114.761:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:53.953666 systemd[1]: Finished ignition-files.service. 
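The op(12)/op(16) sub-operations above show the fallback used for the OEM partition: mounting /dev/disk/by-label/OEM as ext4 fails with "device or resource busy", so the same device is retried as btrfs, the unit file is copied out, and the temporary mountpoint is unmounted. A rough Python sketch of that try-each-filesystem loop (illustrative only; it shells out to mount(8), needs root, and the helper name is made up):

import subprocess
import tempfile

def mount_with_fallback(device: str, fstypes=("ext4", "btrfs")) -> str:
    mountpoint = tempfile.mkdtemp(prefix="oem")
    for fstype in fstypes:
        result = subprocess.run(["mount", "-t", fstype, device, mountpoint],
                                capture_output=True)
        if result.returncode == 0:
            return mountpoint  # caller copies files, then unmounts, as op(19) does
    raise RuntimeError(f"could not mount {device} as any of {fstypes}")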
Feb 9 19:31:54.851342 kernel: audit: type=1131 audit(1707507114.823:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(1c): [started] processing unit "oem-gce-enable-oslogin.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(1c): [finished] processing unit "oem-gce-enable-oslogin.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(1d): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(1d): op(1e): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(1d): op(1e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(1d): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(1f): [started] processing unit "prepare-critools.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(1f): op(20): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(1f): [finished] processing unit "prepare-critools.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(21): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(21): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(22): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(23): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(24): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(24): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(25): [started] setting preset to enabled for "oem-gce.service" Feb 9 19:31:54.851590 ignition[836]: INFO : files: op(25): [finished] setting preset to enabled for "oem-gce.service" Feb 9 19:31:54.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:31:54.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:53.975567 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:31:55.261334 initrd-setup-root-after-ignition[859]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:31:55.282476 iscsid[701]: iscsid shutting down. Feb 9 19:31:55.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.297404 ignition[836]: INFO : files: createResultFile: createFiles: op(26): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:31:55.297404 ignition[836]: INFO : files: createResultFile: createFiles: op(26): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:31:55.297404 ignition[836]: INFO : files: files passed Feb 9 19:31:55.297404 ignition[836]: INFO : Ignition finished successfully Feb 9 19:31:55.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:31:55.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.019329 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:31:54.020390 systemd[1]: Starting ignition-quench.service... Feb 9 19:31:54.034767 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:31:54.055912 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:31:54.056050 systemd[1]: Finished ignition-quench.service. Feb 9 19:31:55.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.099698 systemd[1]: Reached target ignition-complete.target. Feb 9 19:31:55.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.463000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:31:54.170390 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:31:55.488419 ignition[874]: INFO : Ignition 2.14.0 Feb 9 19:31:55.488419 ignition[874]: INFO : Stage: umount Feb 9 19:31:55.488419 ignition[874]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:31:55.488419 ignition[874]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:31:55.488419 ignition[874]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:31:55.488419 ignition[874]: INFO : umount: umount passed Feb 9 19:31:55.488419 ignition[874]: INFO : Ignition finished successfully Feb 9 19:31:55.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.199595 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:31:54.199705 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:31:54.215524 systemd[1]: Reached target initrd-fs.target. 
Feb 9 19:31:55.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.272520 systemd[1]: Reached target initrd.target. Feb 9 19:31:55.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.297579 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:31:55.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.298886 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:31:54.330643 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:31:55.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.356721 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:31:55.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.408241 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:31:55.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:55.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:54.422570 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:31:54.438606 systemd[1]: Stopped target timers.target. Feb 9 19:31:54.453606 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:31:54.453786 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:31:54.475949 systemd[1]: Stopped target initrd.target. Feb 9 19:31:54.519681 systemd[1]: Stopped target basic.target. Feb 9 19:31:55.809337 systemd-journald[189]: Received SIGTERM from PID 1 (n/a). Feb 9 19:31:54.541680 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:31:54.563660 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:31:54.588635 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:31:54.614657 systemd[1]: Stopped target remote-fs.target. Feb 9 19:31:54.638660 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:31:54.660633 systemd[1]: Stopped target sysinit.target. Feb 9 19:31:54.681623 systemd[1]: Stopped target local-fs.target. Feb 9 19:31:54.702607 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:31:54.726662 systemd[1]: Stopped target swap.target. Feb 9 19:31:54.744587 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:31:54.744767 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:31:54.762796 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:31:54.799586 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:31:54.799768 systemd[1]: Stopped dracut-initqueue.service. 
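The audit SERVICE_START/SERVICE_STOP records interleaved through this teardown share the layout visible above: a msg='...' payload carrying unit= and res= fields. A small Python sketch for pulling those fields out of a line, assuming one complete record per line (some records in this dump wrap across lines):

import re

AUDIT_RE = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=(\S+) .*?res=(\w+)")

def parse_audit_record(line: str):
    # Returns e.g. ("SERVICE_STOP", "dracut-pre-udev", "success"), or None if no match.
    m = AUDIT_RE.search(line)
    return m.groups() if m else None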
Feb 9 19:31:54.824740 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:31:54.825016 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:31:54.861642 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:31:54.861817 systemd[1]: Stopped ignition-files.service. Feb 9 19:31:54.875053 systemd[1]: Stopping ignition-mount.service... Feb 9 19:31:54.891775 systemd[1]: Stopping iscsid.service... Feb 9 19:31:54.915288 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:31:54.915537 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:31:54.942756 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:31:54.966338 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:31:54.966609 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:31:54.986643 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:31:54.986818 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:31:55.009208 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:31:55.010099 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:31:55.010223 systemd[1]: Stopped iscsid.service. Feb 9 19:31:55.030047 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:31:55.030157 systemd[1]: Stopped ignition-mount.service. Feb 9 19:31:55.054945 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:31:55.055055 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:31:55.072971 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:31:55.073144 systemd[1]: Stopped ignition-disks.service. Feb 9 19:31:55.086536 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:31:55.086604 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:31:55.105560 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:31:55.105623 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:31:55.124538 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:31:55.124601 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:31:55.143544 systemd[1]: Stopped target paths.target. Feb 9 19:31:55.180398 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:31:55.185266 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:31:55.194484 systemd[1]: Stopped target slices.target. Feb 9 19:31:55.215520 systemd[1]: Stopped target sockets.target. Feb 9 19:31:55.251488 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:31:55.251547 systemd[1]: Closed iscsid.socket. Feb 9 19:31:55.268457 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:31:55.268528 systemd[1]: Stopped ignition-setup.service. Feb 9 19:31:55.290478 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:31:55.290543 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:31:55.305559 systemd[1]: Stopping iscsiuio.service... Feb 9 19:31:55.327712 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:31:55.327833 systemd[1]: Stopped iscsiuio.service. Feb 9 19:31:55.349672 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:31:55.349782 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:31:55.365250 systemd[1]: Stopped target network.target. Feb 9 19:31:55.380328 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:31:55.380399 systemd[1]: Closed iscsiuio.socket. 
Feb 9 19:31:55.404527 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:31:55.407250 systemd-networkd[692]: eth0: DHCPv6 lease lost Feb 9 19:31:55.819000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:31:55.419455 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:31:55.433775 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:31:55.433908 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:31:55.450112 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:31:55.450258 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:31:55.465088 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:31:55.465127 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:31:55.481299 systemd[1]: Stopping network-cleanup.service... Feb 9 19:31:55.495397 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:31:55.495492 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:31:55.516534 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:31:55.516601 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:31:55.527604 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:31:55.527661 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:31:55.552691 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:31:55.570998 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:31:55.571674 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:31:55.571822 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:31:55.591738 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:31:55.591840 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:31:55.606399 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:31:55.606466 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:31:55.622342 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:31:55.622438 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:31:55.637418 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:31:55.637477 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:31:55.652374 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:31:55.652452 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:31:55.668344 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:31:55.687303 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:31:55.687503 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:31:55.704072 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:31:55.704219 systemd[1]: Stopped network-cleanup.service. Feb 9 19:31:55.718735 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:31:55.718845 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:31:55.734658 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:31:55.750393 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:31:55.773440 systemd[1]: Switching root. Feb 9 19:31:55.822050 systemd-journald[189]: Journal stopped Feb 9 19:32:00.373877 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:32:00.374009 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 19:32:00.374037 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:32:00.374066 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:32:00.374089 kernel: SELinux: policy capability open_perms=1 Feb 9 19:32:00.374110 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:32:00.374134 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:32:00.374157 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:32:00.374208 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:32:00.374230 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:32:00.374253 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:32:00.374278 systemd[1]: Successfully loaded SELinux policy in 109.413ms. Feb 9 19:32:00.374325 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.269ms. Feb 9 19:32:00.374357 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:32:00.374382 systemd[1]: Detected virtualization kvm. Feb 9 19:32:00.374408 systemd[1]: Detected architecture x86-64. Feb 9 19:32:00.374439 systemd[1]: Detected first boot. Feb 9 19:32:00.374467 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:32:00.374489 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:32:00.374511 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:32:00.374534 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:32:00.374559 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:32:00.374585 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:32:00.374616 kernel: kauditd_printk_skb: 48 callbacks suppressed Feb 9 19:32:00.374641 kernel: audit: type=1334 audit(1707507119.498:88): prog-id=12 op=LOAD Feb 9 19:32:00.374661 kernel: audit: type=1334 audit(1707507119.498:89): prog-id=3 op=UNLOAD Feb 9 19:32:00.374683 kernel: audit: type=1334 audit(1707507119.504:90): prog-id=13 op=LOAD Feb 9 19:32:00.374704 kernel: audit: type=1334 audit(1707507119.511:91): prog-id=14 op=LOAD Feb 9 19:32:00.374725 kernel: audit: type=1334 audit(1707507119.511:92): prog-id=4 op=UNLOAD Feb 9 19:32:00.374746 kernel: audit: type=1334 audit(1707507119.511:93): prog-id=5 op=UNLOAD Feb 9 19:32:00.374768 kernel: audit: type=1334 audit(1707507119.518:94): prog-id=15 op=LOAD Feb 9 19:32:00.374789 kernel: audit: type=1334 audit(1707507119.518:95): prog-id=12 op=UNLOAD Feb 9 19:32:00.374814 kernel: audit: type=1334 audit(1707507119.532:96): prog-id=16 op=LOAD Feb 9 19:32:00.374834 kernel: audit: type=1334 audit(1707507119.546:97): prog-id=17 op=LOAD Feb 9 19:32:00.374856 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 19:32:00.374879 systemd[1]: Stopped initrd-switch-root.service. 
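"Initializing machine ID from VM UUID" above refers to systemd seeding the machine ID from the hypervisor-provided SMBIOS UUID on this first boot. A hedged sketch of roughly where that value comes from on a KVM guest (systemd's real logic is in C and covers more cases; the sysfs node is root-readable):

def machine_id_from_vm_uuid(path: str = "/sys/class/dmi/id/product_uuid") -> str:
    with open(path) as f:
        uuid = f.read().strip()
    # machine-id format is 32 lowercase hex characters with no dashes
    return uuid.replace("-", "").lower()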
Feb 9 19:32:00.374903 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 19:32:00.374935 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:32:00.374960 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:32:00.374983 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 19:32:00.375006 systemd[1]: Created slice system-getty.slice. Feb 9 19:32:00.375034 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:32:00.375057 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:32:00.375080 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:32:00.375103 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:32:00.375128 systemd[1]: Created slice user.slice. Feb 9 19:32:00.375156 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:32:00.375194 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:32:00.375219 systemd[1]: Set up automount boot.automount. Feb 9 19:32:00.375250 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:32:00.375275 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 19:32:00.375303 systemd[1]: Stopped target initrd-fs.target. Feb 9 19:32:00.375328 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 19:32:00.375354 systemd[1]: Reached target integritysetup.target. Feb 9 19:32:00.375380 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:32:00.375407 systemd[1]: Reached target remote-fs.target. Feb 9 19:32:00.375436 systemd[1]: Reached target slices.target. Feb 9 19:32:00.375461 systemd[1]: Reached target swap.target. Feb 9 19:32:00.375489 systemd[1]: Reached target torcx.target. Feb 9 19:32:00.375514 systemd[1]: Reached target veritysetup.target. Feb 9 19:32:00.375538 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:32:00.375562 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:32:00.375587 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:32:00.375610 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:32:00.375630 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:32:00.375651 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:32:00.375675 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:32:00.375703 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:32:00.375727 systemd[1]: Mounting media.mount... Feb 9 19:32:00.375749 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:32:00.375772 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:32:00.375794 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:32:00.375820 systemd[1]: Mounting tmp.mount... Feb 9 19:32:00.375844 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:32:00.375868 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:32:00.375890 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:32:00.375927 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:32:00.375950 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:32:00.375972 systemd[1]: Starting modprobe@drm.service... Feb 9 19:32:00.376001 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:32:00.376022 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:32:00.376045 systemd[1]: Starting modprobe@loop.service... 
Feb 9 19:32:00.376067 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:32:00.376090 kernel: fuse: init (API version 7.34) Feb 9 19:32:00.376112 kernel: loop: module loaded Feb 9 19:32:00.376139 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 19:32:00.376161 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 19:32:00.376203 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 19:32:00.376228 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 19:32:00.376249 systemd[1]: Stopped systemd-journald.service. Feb 9 19:32:00.376270 systemd[1]: Starting systemd-journald.service... Feb 9 19:32:00.376292 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:32:00.376316 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:32:00.376342 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:32:00.376368 systemd-journald[999]: Journal started Feb 9 19:32:00.376460 systemd-journald[999]: Runtime Journal (/run/log/journal/41f843545f79cca638c03ca09a76dabc) is 8.0M, max 148.8M, 140.8M free. Feb 9 19:31:56.097000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:31:56.243000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:31:56.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:31:56.243000 audit: BPF prog-id=10 op=LOAD Feb 9 19:31:56.243000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:31:56.243000 audit: BPF prog-id=11 op=LOAD Feb 9 19:31:56.243000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:31:56.420000 audit[907]: AVC avc: denied { associate } for pid=907 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:31:56.420000 audit[907]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=890 pid=907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:31:56.420000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:31:56.430000 audit[907]: AVC avc: denied { associate } for pid=907 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:31:56.430000 audit[907]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=890 pid=907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:31:56.430000 audit: CWD cwd="/" Feb 9 19:31:56.430000 audit: PATH item=0 name=(null) 
inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:31:56.430000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:31:56.430000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:31:59.498000 audit: BPF prog-id=12 op=LOAD Feb 9 19:31:59.498000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:31:59.504000 audit: BPF prog-id=13 op=LOAD Feb 9 19:31:59.511000 audit: BPF prog-id=14 op=LOAD Feb 9 19:31:59.511000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:31:59.511000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:31:59.518000 audit: BPF prog-id=15 op=LOAD Feb 9 19:31:59.518000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:31:59.532000 audit: BPF prog-id=16 op=LOAD Feb 9 19:31:59.546000 audit: BPF prog-id=17 op=LOAD Feb 9 19:31:59.546000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:31:59.546000 audit: BPF prog-id=14 op=UNLOAD Feb 9 19:31:59.553000 audit: BPF prog-id=18 op=LOAD Feb 9 19:31:59.553000 audit: BPF prog-id=15 op=UNLOAD Feb 9 19:31:59.560000 audit: BPF prog-id=19 op=LOAD Feb 9 19:31:59.567000 audit: BPF prog-id=20 op=LOAD Feb 9 19:31:59.567000 audit: BPF prog-id=16 op=UNLOAD Feb 9 19:31:59.567000 audit: BPF prog-id=17 op=UNLOAD Feb 9 19:31:59.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:59.589000 audit: BPF prog-id=18 op=UNLOAD Feb 9 19:31:59.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:31:59.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:32:00.326000 audit: BPF prog-id=21 op=LOAD Feb 9 19:32:00.326000 audit: BPF prog-id=22 op=LOAD Feb 9 19:32:00.326000 audit: BPF prog-id=23 op=LOAD Feb 9 19:32:00.326000 audit: BPF prog-id=19 op=UNLOAD Feb 9 19:32:00.326000 audit: BPF prog-id=20 op=UNLOAD Feb 9 19:32:00.370000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:32:00.370000 audit[999]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff95b9ff40 a2=4000 a3=7fff95b9ffdc items=0 ppid=1 pid=999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:32:00.370000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:31:56.417917 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:31:59.498167 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:31:56.418975 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:31:59.569858 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 19:31:56.419000 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:31:56.419039 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 19:31:56.419051 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 19:31:56.419091 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 19:31:56.419107 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 19:31:56.419352 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 19:31:56.419405 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:31:56.419422 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:31:56.420584 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 19:31:56.420647 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" 
level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 19:31:56.420682 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 19:31:56.420718 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 19:31:56.420753 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 19:31:56.420780 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 19:31:58.905343 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:58Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:31:58.905650 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:58Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:31:58.906463 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:58Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:31:58.907073 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:58Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:31:58.907166 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:58Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 19:31:58.907290 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-02-09T19:31:58Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 19:32:00.398221 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:32:00.416426 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 19:32:00.416535 systemd[1]: Stopped verity-setup.service. Feb 9 19:32:00.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:32:00.436211 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:32:00.445220 systemd[1]: Started systemd-journald.service. Feb 9 19:32:00.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.454752 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:32:00.462516 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:32:00.469494 systemd[1]: Mounted media.mount. Feb 9 19:32:00.476472 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:32:00.485466 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:32:00.494414 systemd[1]: Mounted tmp.mount. Feb 9 19:32:00.502542 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:32:00.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.511697 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:32:00.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.520620 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:32:00.520828 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:32:00.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.529749 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:32:00.529954 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:32:00.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.538689 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:32:00.538892 systemd[1]: Finished modprobe@drm.service. Feb 9 19:32:00.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.548696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:32:00.548909 systemd[1]: Finished modprobe@efi_pstore.service. 
Feb 9 19:32:00.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.558702 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:32:00.558904 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:32:00.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.568753 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:32:00.568956 systemd[1]: Finished modprobe@loop.service. Feb 9 19:32:00.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.577691 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:32:00.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.586662 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:32:00.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.595654 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:32:00.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.604651 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:32:00.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.613962 systemd[1]: Reached target network-pre.target. Feb 9 19:32:00.623627 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:32:00.633597 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:32:00.641293 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:32:00.643869 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:32:00.652631 systemd[1]: Starting systemd-journal-flush.service... 
Feb 9 19:32:00.661327 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:32:00.662941 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:32:00.670342 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:32:00.671959 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:32:00.677792 systemd-journald[999]: Time spent on flushing to /var/log/journal/41f843545f79cca638c03ca09a76dabc is 64.355ms for 1168 entries. Feb 9 19:32:00.677792 systemd-journald[999]: System Journal (/var/log/journal/41f843545f79cca638c03ca09a76dabc) is 8.0M, max 584.8M, 576.8M free. Feb 9 19:32:00.776608 systemd-journald[999]: Received client request to flush runtime journal. Feb 9 19:32:00.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:00.687544 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:32:00.696837 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:32:00.707763 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:32:00.779116 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 19:32:00.716524 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:32:00.724645 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:32:00.733717 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:32:00.746018 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:32:00.754857 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:32:00.777895 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:32:00.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:01.334154 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:32:01.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:01.342000 audit: BPF prog-id=24 op=LOAD Feb 9 19:32:01.342000 audit: BPF prog-id=25 op=LOAD Feb 9 19:32:01.342000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:32:01.342000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:32:01.344253 systemd[1]: Starting systemd-udevd.service... Feb 9 19:32:01.367123 systemd-udevd[1016]: Using default interface naming scheme 'v252'. Feb 9 19:32:01.409669 systemd[1]: Started systemd-udevd.service. Feb 9 19:32:01.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:32:01.419000 audit: BPF prog-id=26 op=LOAD Feb 9 19:32:01.421639 systemd[1]: Starting systemd-networkd.service... Feb 9 19:32:01.435000 audit: BPF prog-id=27 op=LOAD Feb 9 19:32:01.436000 audit: BPF prog-id=28 op=LOAD Feb 9 19:32:01.436000 audit: BPF prog-id=29 op=LOAD Feb 9 19:32:01.438319 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:32:01.486695 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 19:32:01.515592 systemd[1]: Started systemd-userdbd.service. Feb 9 19:32:01.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:01.628904 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 19:32:01.672429 systemd-networkd[1029]: lo: Link UP Feb 9 19:32:01.672442 systemd-networkd[1029]: lo: Gained carrier Feb 9 19:32:01.673212 systemd-networkd[1029]: Enumeration completed Feb 9 19:32:01.673369 systemd[1]: Started systemd-networkd.service. Feb 9 19:32:01.673391 systemd-networkd[1029]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:32:01.675583 systemd-networkd[1029]: eth0: Link UP Feb 9 19:32:01.675762 systemd-networkd[1029]: eth0: Gained carrier Feb 9 19:32:01.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:01.634000 audit[1026]: AVC avc: denied { confidentiality } for pid=1026 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:32:01.688383 systemd-networkd[1029]: eth0: DHCPv4 address 10.128.0.33/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 9 19:32:01.634000 audit[1026]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f7ee460e60 a1=32194 a2=7ff072152bc5 a3=5 items=108 ppid=1016 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:32:01.634000 audit: CWD cwd="/" Feb 9 19:32:01.634000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=1 name=(null) inode=14653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=2 name=(null) inode=14653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=3 name=(null) inode=14654 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=4 name=(null) inode=14653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=5 name=(null) inode=14655 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=6 name=(null) inode=14653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=7 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=8 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.710248 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1028) Feb 9 19:32:01.634000 audit: PATH item=9 name=(null) inode=14657 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=10 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=11 name=(null) inode=14658 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=12 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=13 name=(null) inode=14659 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=14 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=15 name=(null) inode=14660 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=16 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=17 name=(null) inode=14661 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=18 name=(null) inode=14653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=19 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=20 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=21 name=(null) inode=14663 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=22 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=23 name=(null) inode=14664 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=24 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=25 name=(null) inode=14665 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=26 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=27 name=(null) inode=14666 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=28 name=(null) inode=14662 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=29 name=(null) inode=14667 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=30 name=(null) inode=14653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=31 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=32 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=33 name=(null) inode=14669 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=34 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=35 name=(null) inode=14670 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=36 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=37 name=(null) inode=14671 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=38 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=39 name=(null) inode=14672 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=40 name=(null) inode=14668 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=41 name=(null) inode=14673 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=42 name=(null) inode=14653 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=43 name=(null) inode=14674 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=44 name=(null) inode=14674 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=45 name=(null) inode=14675 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=46 name=(null) inode=14674 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=47 name=(null) inode=14676 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=48 name=(null) inode=14674 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=49 name=(null) inode=14677 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=50 name=(null) inode=14674 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=51 name=(null) inode=14678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=52 name=(null) inode=14674 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=53 name=(null) inode=14679 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 
audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=55 name=(null) inode=14680 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=56 name=(null) inode=14680 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=57 name=(null) inode=14681 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=58 name=(null) inode=14680 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=59 name=(null) inode=14682 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=60 name=(null) inode=14680 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=61 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=62 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=63 name=(null) inode=14684 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=64 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=65 name=(null) inode=14685 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=66 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=67 name=(null) inode=14686 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=68 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=69 name=(null) inode=14687 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=70 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=71 name=(null) inode=14688 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=72 name=(null) inode=14680 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=73 name=(null) inode=14689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=74 name=(null) inode=14689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=75 name=(null) inode=14690 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=76 name=(null) inode=14689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=77 name=(null) inode=14691 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=78 name=(null) inode=14689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=79 name=(null) inode=14692 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=80 name=(null) inode=14689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=81 name=(null) inode=14693 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=82 name=(null) inode=14689 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=83 name=(null) inode=14694 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=84 name=(null) inode=14680 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=85 name=(null) inode=14695 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=86 name=(null) inode=14695 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=87 name=(null) inode=14696 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=88 name=(null) inode=14695 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=89 name=(null) inode=14697 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=90 name=(null) inode=14695 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=91 name=(null) inode=14698 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=92 name=(null) inode=14695 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=93 name=(null) inode=14699 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=94 name=(null) inode=14695 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=95 name=(null) inode=14700 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=96 name=(null) inode=14680 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=97 name=(null) inode=14701 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=98 name=(null) inode=14701 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=99 name=(null) inode=14702 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=100 name=(null) inode=14701 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=101 name=(null) inode=14703 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=102 name=(null) inode=14701 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=103 name=(null) inode=14704 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=104 name=(null) inode=14701 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=105 name=(null) inode=14705 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=106 name=(null) inode=14701 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PATH item=107 name=(null) inode=14706 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:32:01.634000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:32:01.773543 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:32:01.773652 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Feb 9 19:32:01.778348 kernel: ACPI: button: Sleep Button [SLPF] Feb 9 19:32:01.782536 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:32:01.798877 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Feb 9 19:32:01.800003 kernel: EDAC MC: Ver: 3.0.0 Feb 9 19:32:01.813213 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 9 19:32:01.826216 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:32:01.847692 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:32:01.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:01.857929 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:32:01.886977 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:32:01.919489 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:32:01.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:01.928526 systemd[1]: Reached target cryptsetup.target. Feb 9 19:32:01.938764 systemd[1]: Starting lvm2-activation.service... Feb 9 19:32:01.944882 lvm[1054]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:32:01.972541 systemd[1]: Finished lvm2-activation.service. Feb 9 19:32:01.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:01.981487 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:32:01.990292 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:32:01.990340 systemd[1]: Reached target local-fs.target. Feb 9 19:32:01.998295 systemd[1]: Reached target machines.target. Feb 9 19:32:02.007741 systemd[1]: Starting ldconfig.service... 
Feb 9 19:32:02.015293 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:32:02.015386 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:32:02.016929 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:32:02.025784 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:32:02.037376 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:32:02.037775 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:32:02.037872 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:32:02.039591 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:32:02.040431 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1056 (bootctl) Feb 9 19:32:02.042770 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:32:02.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:02.063944 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:32:02.085575 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:32:02.096037 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:32:02.111775 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:32:02.197577 systemd-fsck[1065]: fsck.fat 4.2 (2021-01-31) Feb 9 19:32:02.197577 systemd-fsck[1065]: /dev/sda1: 789 files, 115339/258078 clusters Feb 9 19:32:02.200614 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:32:02.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:02.212323 systemd[1]: Mounting boot.mount... Feb 9 19:32:02.241097 systemd[1]: Mounted boot.mount. Feb 9 19:32:02.266314 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:32:02.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:02.368857 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:32:02.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:02.379771 systemd[1]: Starting audit-rules.service... Feb 9 19:32:02.390024 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:32:02.400045 systemd[1]: Starting oem-gce-enable-oslogin.service... Feb 9 19:32:02.411144 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:32:02.420000 audit: BPF prog-id=30 op=LOAD Feb 9 19:32:02.422543 systemd[1]: Starting systemd-resolved.service... 
Feb 9 19:32:02.430000 audit: BPF prog-id=31 op=LOAD Feb 9 19:32:02.433105 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:32:02.441887 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:32:02.450000 audit[1087]: SYSTEM_BOOT pid=1087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:32:02.451507 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:32:02.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:02.464103 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:32:02.469559 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:32:02.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:02.539485 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Feb 9 19:32:02.539749 systemd[1]: Finished oem-gce-enable-oslogin.service. Feb 9 19:32:02.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:02.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:02.558788 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:32:02.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:32:02.573003 augenrules[1104]: No rules Feb 9 19:32:02.571000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:32:02.571000 audit[1104]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffea51fcb80 a2=420 a3=0 items=0 ppid=1068 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:32:02.571000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:32:02.574407 systemd[1]: Finished audit-rules.service. Feb 9 19:32:02.585893 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:32:02.587322 systemd-timesyncd[1085]: Contacted time server 169.254.169.254:123 (169.254.169.254). Feb 9 19:32:02.587392 systemd-timesyncd[1085]: Initial clock synchronization to Fri 2024-02-09 19:32:02.674082 UTC. Feb 9 19:32:02.594637 systemd-resolved[1083]: Positive Trust Anchors: Feb 9 19:32:02.594658 systemd-resolved[1083]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:32:02.594721 systemd-resolved[1083]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:32:02.595651 systemd[1]: Reached target time-set.target. Feb 9 19:32:02.629276 systemd-resolved[1083]: Defaulting to hostname 'linux'. Feb 9 19:32:02.633116 systemd[1]: Started systemd-resolved.service. Feb 9 19:32:02.641408 systemd[1]: Reached target network.target. Feb 9 19:32:02.650332 systemd[1]: Reached target nss-lookup.target. Feb 9 19:32:02.801935 ldconfig[1055]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:32:02.879847 systemd[1]: Finished ldconfig.service. Feb 9 19:32:02.889687 systemd[1]: Starting systemd-update-done.service... Feb 9 19:32:02.898393 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:32:02.899518 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:32:02.909726 systemd[1]: Finished systemd-update-done.service. Feb 9 19:32:02.918524 systemd[1]: Reached target sysinit.target. Feb 9 19:32:02.927445 systemd[1]: Started motdgen.path. Feb 9 19:32:02.934391 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:32:02.944589 systemd[1]: Started logrotate.timer. Feb 9 19:32:02.951462 systemd[1]: Started mdadm.timer. Feb 9 19:32:02.958319 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:32:02.966331 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:32:02.966393 systemd[1]: Reached target paths.target. Feb 9 19:32:02.973293 systemd[1]: Reached target timers.target. Feb 9 19:32:02.980755 systemd[1]: Listening on dbus.socket. Feb 9 19:32:02.989675 systemd[1]: Starting docker.socket... Feb 9 19:32:02.999908 systemd[1]: Listening on sshd.socket. Feb 9 19:32:03.007434 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:32:03.008112 systemd[1]: Listening on docker.socket. Feb 9 19:32:03.015449 systemd[1]: Reached target sockets.target. Feb 9 19:32:03.024310 systemd[1]: Reached target basic.target. Feb 9 19:32:03.031385 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:32:03.031437 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:32:03.032948 systemd[1]: Starting containerd.service... Feb 9 19:32:03.041733 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:32:03.052087 systemd[1]: Starting dbus.service... Feb 9 19:32:03.058646 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:32:03.068961 systemd[1]: Starting extend-filesystems.service... Feb 9 19:32:03.076337 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:32:03.078161 systemd[1]: Starting motdgen.service... 
Feb 9 19:32:03.087480 systemd[1]: Starting oem-gce.service... Feb 9 19:32:03.089349 jq[1116]: false Feb 9 19:32:03.095874 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:32:03.106110 systemd[1]: Starting prepare-critools.service... Feb 9 19:32:03.115044 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:32:03.124070 systemd[1]: Starting sshd-keygen.service... Feb 9 19:32:03.135104 systemd[1]: Starting systemd-logind.service... Feb 9 19:32:03.142345 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:32:03.142456 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Feb 9 19:32:03.143175 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 19:32:03.144364 systemd[1]: Starting update-engine.service... Feb 9 19:32:03.148502 extend-filesystems[1117]: Found sda Feb 9 19:32:03.197425 extend-filesystems[1117]: Found sda1 Feb 9 19:32:03.197425 extend-filesystems[1117]: Found sda2 Feb 9 19:32:03.197425 extend-filesystems[1117]: Found sda3 Feb 9 19:32:03.197425 extend-filesystems[1117]: Found usr Feb 9 19:32:03.197425 extend-filesystems[1117]: Found sda4 Feb 9 19:32:03.197425 extend-filesystems[1117]: Found sda6 Feb 9 19:32:03.197425 extend-filesystems[1117]: Found sda7 Feb 9 19:32:03.197425 extend-filesystems[1117]: Found sda9 Feb 9 19:32:03.197425 extend-filesystems[1117]: Checking size of /dev/sda9 Feb 9 19:32:03.153999 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:32:03.166746 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:32:03.261697 jq[1140]: true Feb 9 19:32:03.167017 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:32:03.167591 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:32:03.167832 systemd[1]: Finished motdgen.service. Feb 9 19:32:03.263291 tar[1145]: ./ Feb 9 19:32:03.263291 tar[1145]: ./loopback Feb 9 19:32:03.179406 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:32:03.179691 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 9 19:32:03.264113 jq[1147]: true Feb 9 19:32:03.264344 systemd-networkd[1029]: eth0: Gained IPv6LL Feb 9 19:32:03.266761 mkfs.ext4[1149]: mke2fs 1.46.5 (30-Dec-2021) Feb 9 19:32:03.266761 mkfs.ext4[1149]: Discarding device blocks: done Feb 9 19:32:03.266761 mkfs.ext4[1149]: Creating filesystem with 262144 4k blocks and 65536 inodes Feb 9 19:32:03.266761 mkfs.ext4[1149]: Filesystem UUID: 9920a112-c72c-4c07-91c0-2417b590cae5 Feb 9 19:32:03.266761 mkfs.ext4[1149]: Superblock backups stored on blocks: Feb 9 19:32:03.266761 mkfs.ext4[1149]: 32768, 98304, 163840, 229376 Feb 9 19:32:03.266761 mkfs.ext4[1149]: Allocating group tables: done Feb 9 19:32:03.266761 mkfs.ext4[1149]: Writing inode tables: done Feb 9 19:32:03.266761 mkfs.ext4[1149]: Creating journal (8192 blocks): done Feb 9 19:32:03.272167 extend-filesystems[1117]: Resized partition /dev/sda9 Feb 9 19:32:03.280351 mkfs.ext4[1149]: Writing superblocks and filesystem accounting information: done Feb 9 19:32:03.292752 extend-filesystems[1161]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:32:03.310272 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Feb 9 19:32:03.310403 tar[1146]: crictl Feb 9 19:32:03.306968 systemd[1]: Started dbus.service. Feb 9 19:32:03.306675 dbus-daemon[1115]: [system] SELinux support is enabled Feb 9 19:32:03.321637 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:32:03.321708 systemd[1]: Reached target system-config.target. Feb 9 19:32:03.324388 dbus-daemon[1115]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1029 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 19:32:03.331368 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:32:03.331412 systemd[1]: Reached target user-config.target. Feb 9 19:32:03.343268 dbus-daemon[1115]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 19:32:03.343840 umount[1167]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Feb 9 19:32:03.350784 systemd[1]: Starting systemd-hostnamed.service... Feb 9 19:32:03.364431 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Feb 9 19:32:03.387008 update_engine[1138]: I0209 19:32:03.373436 1138 main.cc:92] Flatcar Update Engine starting Feb 9 19:32:03.387008 update_engine[1138]: I0209 19:32:03.382072 1138 update_check_scheduler.cc:74] Next update check in 7m15s Feb 9 19:32:03.382005 systemd[1]: Started update-engine.service. Feb 9 19:32:03.393269 systemd[1]: Started locksmithd.service. Feb 9 19:32:03.400956 extend-filesystems[1161]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 9 19:32:03.400956 extend-filesystems[1161]: old_desc_blocks = 1, new_desc_blocks = 2 Feb 9 19:32:03.400956 extend-filesystems[1161]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. 
Feb 9 19:32:03.474388 kernel: loop0: detected capacity change from 0 to 2097152 Feb 9 19:32:03.474452 kernel: EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:32:03.474494 extend-filesystems[1117]: Resized filesystem in /dev/sda9 Feb 9 19:32:03.407147 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:32:03.483974 bash[1178]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:32:03.407455 systemd[1]: Finished extend-filesystems.service. Feb 9 19:32:03.433340 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:32:03.659373 env[1148]: time="2024-02-09T19:32:03.659255831Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:32:03.664775 tar[1145]: ./bandwidth Feb 9 19:32:03.732465 dbus-daemon[1115]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 19:32:03.732673 systemd[1]: Started systemd-hostnamed.service. Feb 9 19:32:03.733444 dbus-daemon[1115]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1179 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 19:32:03.744273 systemd-logind[1135]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:32:03.747903 systemd[1]: Starting polkit.service... Feb 9 19:32:03.753731 systemd-logind[1135]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 9 19:32:03.753904 systemd-logind[1135]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:32:03.760353 systemd-logind[1135]: New seat seat0. Feb 9 19:32:03.772839 systemd[1]: Started systemd-logind.service. Feb 9 19:32:03.783613 coreos-metadata[1114]: Feb 09 19:32:03.783 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Feb 9 19:32:03.789693 coreos-metadata[1114]: Feb 09 19:32:03.789 INFO Fetch failed with 404: resource not found Feb 9 19:32:03.790025 coreos-metadata[1114]: Feb 09 19:32:03.789 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Feb 9 19:32:03.792059 coreos-metadata[1114]: Feb 09 19:32:03.791 INFO Fetch successful Feb 9 19:32:03.792296 coreos-metadata[1114]: Feb 09 19:32:03.792 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Feb 9 19:32:03.793546 coreos-metadata[1114]: Feb 09 19:32:03.793 INFO Fetch failed with 404: resource not found Feb 9 19:32:03.793761 coreos-metadata[1114]: Feb 09 19:32:03.793 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Feb 9 19:32:03.794652 coreos-metadata[1114]: Feb 09 19:32:03.794 INFO Fetch failed with 404: resource not found Feb 9 19:32:03.794870 coreos-metadata[1114]: Feb 09 19:32:03.794 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Feb 9 19:32:03.796920 coreos-metadata[1114]: Feb 09 19:32:03.796 INFO Fetch successful Feb 9 19:32:03.799163 unknown[1114]: wrote ssh authorized keys file for user: core Feb 9 19:32:03.821537 update-ssh-keys[1193]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:32:03.822575 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 19:32:03.859075 env[1148]: time="2024-02-09T19:32:03.859020463Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 9 19:32:03.861114 polkitd[1192]: Started polkitd version 121 Feb 9 19:32:03.864811 env[1148]: time="2024-02-09T19:32:03.864775730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:32:03.868651 env[1148]: time="2024-02-09T19:32:03.868605655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:32:03.868810 env[1148]: time="2024-02-09T19:32:03.868777433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:32:03.882609 env[1148]: time="2024-02-09T19:32:03.881324384Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:32:03.882609 env[1148]: time="2024-02-09T19:32:03.881363534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:32:03.882609 env[1148]: time="2024-02-09T19:32:03.881385853Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:32:03.882609 env[1148]: time="2024-02-09T19:32:03.881401538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:32:03.882609 env[1148]: time="2024-02-09T19:32:03.881574466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:32:03.882609 env[1148]: time="2024-02-09T19:32:03.881899263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:32:03.882609 env[1148]: time="2024-02-09T19:32:03.882232082Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:32:03.882609 env[1148]: time="2024-02-09T19:32:03.882258753Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:32:03.882609 env[1148]: time="2024-02-09T19:32:03.882350176Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:32:03.882609 env[1148]: time="2024-02-09T19:32:03.882392278Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:32:03.888108 polkitd[1192]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 19:32:03.888213 polkitd[1192]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 19:32:03.888802 env[1148]: time="2024-02-09T19:32:03.888763176Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:32:03.888897 env[1148]: time="2024-02-09T19:32:03.888813660Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:32:03.888897 env[1148]: time="2024-02-09T19:32:03.888836981Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Feb 9 19:32:03.888994 env[1148]: time="2024-02-09T19:32:03.888893834Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:32:03.888994 env[1148]: time="2024-02-09T19:32:03.888918452Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:32:03.889114 env[1148]: time="2024-02-09T19:32:03.888992188Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:32:03.889114 env[1148]: time="2024-02-09T19:32:03.889018305Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:32:03.889114 env[1148]: time="2024-02-09T19:32:03.889043030Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:32:03.889114 env[1148]: time="2024-02-09T19:32:03.889065025Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:32:03.889114 env[1148]: time="2024-02-09T19:32:03.889088960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:32:03.889367 env[1148]: time="2024-02-09T19:32:03.889111264Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:32:03.889367 env[1148]: time="2024-02-09T19:32:03.889133806Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:32:03.889367 env[1148]: time="2024-02-09T19:32:03.889292603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:32:03.889509 env[1148]: time="2024-02-09T19:32:03.889417007Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:32:03.889915 env[1148]: time="2024-02-09T19:32:03.889885071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:32:03.890000 env[1148]: time="2024-02-09T19:32:03.889933912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:32:03.890000 env[1148]: time="2024-02-09T19:32:03.889963688Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:32:03.890124 env[1148]: time="2024-02-09T19:32:03.890031749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:32:03.890124 env[1148]: time="2024-02-09T19:32:03.890055299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:32:03.890124 env[1148]: time="2024-02-09T19:32:03.890077069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:32:03.890124 env[1148]: time="2024-02-09T19:32:03.890096146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:32:03.890124 env[1148]: time="2024-02-09T19:32:03.890119136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:32:03.890393 env[1148]: time="2024-02-09T19:32:03.890139778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Feb 9 19:32:03.890393 env[1148]: time="2024-02-09T19:32:03.890160647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:32:03.890393 env[1148]: time="2024-02-09T19:32:03.890196017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:32:03.890393 env[1148]: time="2024-02-09T19:32:03.890220666Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:32:03.890591 env[1148]: time="2024-02-09T19:32:03.890404390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:32:03.890591 env[1148]: time="2024-02-09T19:32:03.890432238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:32:03.890591 env[1148]: time="2024-02-09T19:32:03.890454106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:32:03.890591 env[1148]: time="2024-02-09T19:32:03.890475229Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:32:03.890591 env[1148]: time="2024-02-09T19:32:03.890499880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:32:03.890591 env[1148]: time="2024-02-09T19:32:03.890520055Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:32:03.890591 env[1148]: time="2024-02-09T19:32:03.890557382Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:32:03.890900 env[1148]: time="2024-02-09T19:32:03.890609995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 19:32:03.891008 env[1148]: time="2024-02-09T19:32:03.890922741Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:32:03.891008 env[1148]: time="2024-02-09T19:32:03.891022376Z" level=info msg="Connect containerd service" Feb 9 19:32:03.894446 env[1148]: time="2024-02-09T19:32:03.891077107Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:32:03.898372 polkitd[1192]: Finished loading, compiling and executing 2 rules Feb 9 19:32:03.899228 env[1148]: time="2024-02-09T19:32:03.898691437Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:32:03.899228 env[1148]: time="2024-02-09T19:32:03.898834388Z" level=info msg="Start subscribing containerd event" Feb 9 19:32:03.899228 env[1148]: time="2024-02-09T19:32:03.898893551Z" level=info msg="Start recovering state" Feb 9 19:32:03.899228 env[1148]: time="2024-02-09T19:32:03.898977877Z" level=info msg="Start event monitor" Feb 9 19:32:03.899228 env[1148]: time="2024-02-09T19:32:03.899010292Z" level=info msg="Start snapshots syncer" Feb 9 19:32:03.899228 env[1148]: time="2024-02-09T19:32:03.899025928Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:32:03.899228 env[1148]: time="2024-02-09T19:32:03.899037251Z" level=info msg="Start streaming server" Feb 9 19:32:03.899063 
dbus-daemon[1115]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 19:32:03.899682 env[1148]: time="2024-02-09T19:32:03.899584047Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:32:03.899743 env[1148]: time="2024-02-09T19:32:03.899728004Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:32:03.899946 systemd[1]: Started containerd.service. Feb 9 19:32:03.901571 polkitd[1192]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 19:32:03.902306 env[1148]: time="2024-02-09T19:32:03.900226020Z" level=info msg="containerd successfully booted in 0.312901s" Feb 9 19:32:03.908716 systemd[1]: Started polkit.service. Feb 9 19:32:03.919418 tar[1145]: ./ptp Feb 9 19:32:03.939129 systemd-hostnamed[1179]: Hostname set to (transient) Feb 9 19:32:03.942049 systemd-resolved[1083]: System hostname changed to 'ci-3510-3-2-71c5529006476f7c4aee.c.flatcar-212911.internal'. Feb 9 19:32:04.047881 tar[1145]: ./vlan Feb 9 19:32:04.170393 tar[1145]: ./host-device Feb 9 19:32:04.283088 tar[1145]: ./tuning Feb 9 19:32:04.384655 tar[1145]: ./vrf Feb 9 19:32:04.492833 tar[1145]: ./sbr Feb 9 19:32:04.597403 tar[1145]: ./tap Feb 9 19:32:04.716644 tar[1145]: ./dhcp Feb 9 19:32:05.023833 tar[1145]: ./static Feb 9 19:32:05.099036 systemd[1]: Finished prepare-critools.service. Feb 9 19:32:05.103984 tar[1145]: ./firewall Feb 9 19:32:05.168012 tar[1145]: ./macvlan Feb 9 19:32:05.257465 tar[1145]: ./dummy Feb 9 19:32:05.334478 tar[1145]: ./bridge Feb 9 19:32:05.410386 tar[1145]: ./ipvlan Feb 9 19:32:05.523768 tar[1145]: ./portmap Feb 9 19:32:05.631441 tar[1145]: ./host-local Feb 9 19:32:05.738661 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:32:08.297905 sshd_keygen[1143]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:32:08.306353 locksmithd[1182]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:32:08.346731 systemd[1]: Finished sshd-keygen.service. Feb 9 19:32:08.357837 systemd[1]: Starting issuegen.service... Feb 9 19:32:08.368306 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:32:08.368552 systemd[1]: Finished issuegen.service. Feb 9 19:32:08.377700 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:32:08.390766 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:32:08.401635 systemd[1]: Started getty@tty1.service. Feb 9 19:32:08.412108 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:32:08.420718 systemd[1]: Reached target getty.target. Feb 9 19:32:09.532224 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Feb 9 19:32:11.556237 kernel: loop0: detected capacity change from 0 to 2097152 Feb 9 19:32:11.581532 systemd-nspawn[1224]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Feb 9 19:32:11.581532 systemd-nspawn[1224]: Press ^] three times within 1s to kill container. Feb 9 19:32:11.596233 kernel: EXT4-fs (loop0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:32:11.676248 systemd[1]: Started oem-gce.service. Feb 9 19:32:11.683772 systemd[1]: Reached target multi-user.target. Feb 9 19:32:11.694396 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:32:11.707432 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:32:11.707665 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
Feb 9 19:32:11.717504 systemd[1]: Startup finished in 996ms (kernel) + 8.168s (initrd) + 15.744s (userspace) = 24.909s. Feb 9 19:32:11.766050 systemd-nspawn[1224]: + '[' -e /etc/default/instance_configs.cfg.template ']' Feb 9 19:32:11.766050 systemd-nspawn[1224]: + echo -e '[InstanceSetup]\nset_host_keys = false' Feb 9 19:32:11.766302 systemd-nspawn[1224]: + /usr/bin/google_instance_setup Feb 9 19:32:12.425462 instance-setup[1230]: INFO Running google_set_multiqueue. Feb 9 19:32:12.439732 instance-setup[1230]: INFO Set channels for eth0 to 2. Feb 9 19:32:12.443419 instance-setup[1230]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Feb 9 19:32:12.444724 instance-setup[1230]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Feb 9 19:32:12.445172 instance-setup[1230]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Feb 9 19:32:12.446660 instance-setup[1230]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Feb 9 19:32:12.447002 instance-setup[1230]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Feb 9 19:32:12.448341 instance-setup[1230]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Feb 9 19:32:12.448775 instance-setup[1230]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Feb 9 19:32:12.450217 instance-setup[1230]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Feb 9 19:32:12.461107 instance-setup[1230]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Feb 9 19:32:12.461510 instance-setup[1230]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Feb 9 19:32:12.500265 systemd-nspawn[1224]: + /usr/bin/google_metadata_script_runner --script-type startup Feb 9 19:32:12.743138 systemd[1]: Created slice system-sshd.slice. Feb 9 19:32:12.745065 systemd[1]: Started sshd@0-10.128.0.33:22-147.75.109.163:44950.service. Feb 9 19:32:12.843781 startup-script[1261]: INFO Starting startup scripts. Feb 9 19:32:12.856106 startup-script[1261]: INFO No startup scripts found in metadata. Feb 9 19:32:12.856291 startup-script[1261]: INFO Finished running startup scripts. Feb 9 19:32:12.890037 systemd-nspawn[1224]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Feb 9 19:32:12.890037 systemd-nspawn[1224]: + daemon_pids=() Feb 9 19:32:12.890708 systemd-nspawn[1224]: + for d in accounts clock_skew network Feb 9 19:32:12.890708 systemd-nspawn[1224]: + daemon_pids+=($!) Feb 9 19:32:12.890708 systemd-nspawn[1224]: + for d in accounts clock_skew network Feb 9 19:32:12.890708 systemd-nspawn[1224]: + daemon_pids+=($!) Feb 9 19:32:12.890708 systemd-nspawn[1224]: + for d in accounts clock_skew network Feb 9 19:32:12.891020 systemd-nspawn[1224]: + daemon_pids+=($!) Feb 9 19:32:12.891020 systemd-nspawn[1224]: + NOTIFY_SOCKET=/run/systemd/notify Feb 9 19:32:12.891205 systemd-nspawn[1224]: + /usr/bin/systemd-notify --ready Feb 9 19:32:12.892229 systemd-nspawn[1224]: + /usr/bin/google_clock_skew_daemon Feb 9 19:32:12.892229 systemd-nspawn[1224]: + /usr/bin/google_network_daemon Feb 9 19:32:12.892229 systemd-nspawn[1224]: + /usr/bin/google_accounts_daemon Feb 9 19:32:12.946523 systemd-nspawn[1224]: + wait -n 36 37 38 Feb 9 19:32:13.069870 sshd[1265]: Accepted publickey for core from 147.75.109.163 port 44950 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:32:13.072727 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:32:13.091018 systemd[1]: Created slice user-500.slice. 
Feb 9 19:32:13.095373 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:32:13.100632 systemd-logind[1135]: New session 1 of user core. Feb 9 19:32:13.113413 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:32:13.116056 systemd[1]: Starting user@500.service... Feb 9 19:32:13.156075 (systemd)[1272]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:32:13.368247 systemd[1272]: Queued start job for default target default.target. Feb 9 19:32:13.369056 systemd[1272]: Reached target paths.target. Feb 9 19:32:13.369101 systemd[1272]: Reached target sockets.target. Feb 9 19:32:13.369124 systemd[1272]: Reached target timers.target. Feb 9 19:32:13.369145 systemd[1272]: Reached target basic.target. Feb 9 19:32:13.369234 systemd[1272]: Reached target default.target. Feb 9 19:32:13.369288 systemd[1272]: Startup finished in 194ms. Feb 9 19:32:13.369436 systemd[1]: Started user@500.service. Feb 9 19:32:13.371028 systemd[1]: Started session-1.scope. Feb 9 19:32:13.597909 systemd[1]: Started sshd@1-10.128.0.33:22-147.75.109.163:44952.service. Feb 9 19:32:13.721472 groupadd[1289]: group added to /etc/group: name=google-sudoers, GID=1000 Feb 9 19:32:13.725426 groupadd[1289]: group added to /etc/gshadow: name=google-sudoers Feb 9 19:32:13.754683 groupadd[1289]: new group: name=google-sudoers, GID=1000 Feb 9 19:32:13.787975 google-accounts[1267]: INFO Starting Google Accounts daemon. Feb 9 19:32:13.831993 google-clock-skew[1268]: INFO Starting Google Clock Skew daemon. Feb 9 19:32:13.845047 google-accounts[1267]: WARNING OS Login not installed. Feb 9 19:32:13.854075 google-clock-skew[1268]: INFO Clock drift token has changed: 0. Feb 9 19:32:13.854781 google-accounts[1267]: INFO Creating a new user account for 0. Feb 9 19:32:13.858800 systemd-nspawn[1224]: hwclock: Cannot access the Hardware Clock via any known method. Feb 9 19:32:13.859106 systemd-nspawn[1224]: hwclock: Use the --verbose option to see the details of our search for an access method. Feb 9 19:32:13.860033 google-clock-skew[1268]: WARNING Failed to sync system time with hardware clock. Feb 9 19:32:13.867984 systemd-nspawn[1224]: useradd: invalid user name '0': use --badname to ignore Feb 9 19:32:13.868877 google-accounts[1267]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Feb 9 19:32:13.877844 google-networking[1269]: INFO Starting Google Networking daemon. Feb 9 19:32:13.906963 sshd[1283]: Accepted publickey for core from 147.75.109.163 port 44952 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:32:13.908830 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:32:13.914924 systemd-logind[1135]: New session 2 of user core. Feb 9 19:32:13.915653 systemd[1]: Started session-2.scope. Feb 9 19:32:14.125633 sshd[1283]: pam_unix(sshd:session): session closed for user core Feb 9 19:32:14.130329 systemd[1]: sshd@1-10.128.0.33:22-147.75.109.163:44952.service: Deactivated successfully. Feb 9 19:32:14.131395 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:32:14.132231 systemd-logind[1135]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:32:14.133665 systemd-logind[1135]: Removed session 2. Feb 9 19:32:14.170917 systemd[1]: Started sshd@2-10.128.0.33:22-147.75.109.163:44956.service. 
Feb 9 19:32:14.453702 sshd[1305]: Accepted publickey for core from 147.75.109.163 port 44956 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:32:14.455948 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:32:14.462491 systemd[1]: Started session-3.scope. Feb 9 19:32:14.463319 systemd-logind[1135]: New session 3 of user core. Feb 9 19:32:14.661115 sshd[1305]: pam_unix(sshd:session): session closed for user core Feb 9 19:32:14.665114 systemd[1]: sshd@2-10.128.0.33:22-147.75.109.163:44956.service: Deactivated successfully. Feb 9 19:32:14.666126 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:32:14.667004 systemd-logind[1135]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:32:14.668276 systemd-logind[1135]: Removed session 3. Feb 9 19:32:14.707633 systemd[1]: Started sshd@3-10.128.0.33:22-147.75.109.163:52522.service. Feb 9 19:32:14.993498 sshd[1311]: Accepted publickey for core from 147.75.109.163 port 52522 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:32:14.995369 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:32:15.001295 systemd-logind[1135]: New session 4 of user core. Feb 9 19:32:15.001972 systemd[1]: Started session-4.scope. Feb 9 19:32:15.209521 sshd[1311]: pam_unix(sshd:session): session closed for user core Feb 9 19:32:15.213415 systemd[1]: sshd@3-10.128.0.33:22-147.75.109.163:52522.service: Deactivated successfully. Feb 9 19:32:15.214474 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:32:15.215805 systemd-logind[1135]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:32:15.216996 systemd-logind[1135]: Removed session 4. Feb 9 19:32:15.254633 systemd[1]: Started sshd@4-10.128.0.33:22-147.75.109.163:52532.service. Feb 9 19:32:15.534065 sshd[1317]: Accepted publickey for core from 147.75.109.163 port 52532 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:32:15.536044 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:32:15.542422 systemd[1]: Started session-5.scope. Feb 9 19:32:15.543218 systemd-logind[1135]: New session 5 of user core. Feb 9 19:32:15.728000 sudo[1320]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:32:15.728393 sudo[1320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:32:16.308626 systemd[1]: Reloading. Feb 9 19:32:16.424147 /usr/lib/systemd/system-generators/torcx-generator[1353]: time="2024-02-09T19:32:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:32:16.440307 /usr/lib/systemd/system-generators/torcx-generator[1353]: time="2024-02-09T19:32:16Z" level=info msg="torcx already run" Feb 9 19:32:16.505574 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:32:16.505600 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 19:32:16.529034 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:32:16.669001 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:32:16.676968 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:32:16.677749 systemd[1]: Reached target network-online.target. Feb 9 19:32:16.680024 systemd[1]: Started kubelet.service. Feb 9 19:32:16.701215 systemd[1]: Starting coreos-metadata.service... Feb 9 19:32:16.780204 coreos-metadata[1401]: Feb 09 19:32:16.779 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Feb 9 19:32:16.781813 coreos-metadata[1401]: Feb 09 19:32:16.781 INFO Fetch successful Feb 9 19:32:16.781813 coreos-metadata[1401]: Feb 09 19:32:16.781 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Feb 9 19:32:16.782600 coreos-metadata[1401]: Feb 09 19:32:16.782 INFO Fetch successful Feb 9 19:32:16.782600 coreos-metadata[1401]: Feb 09 19:32:16.782 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Feb 9 19:32:16.783212 coreos-metadata[1401]: Feb 09 19:32:16.783 INFO Fetch successful Feb 9 19:32:16.783212 coreos-metadata[1401]: Feb 09 19:32:16.783 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Feb 9 19:32:16.786213 coreos-metadata[1401]: Feb 09 19:32:16.784 INFO Fetch successful Feb 9 19:32:16.786329 kubelet[1393]: E0209 19:32:16.786162 1393 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 19:32:16.789909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:32:16.790141 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:32:16.797703 systemd[1]: Finished coreos-metadata.service. Feb 9 19:32:17.202652 systemd[1]: Stopped kubelet.service. Feb 9 19:32:17.231829 systemd[1]: Reloading. Feb 9 19:32:17.322789 /usr/lib/systemd/system-generators/torcx-generator[1457]: time="2024-02-09T19:32:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:32:17.322840 /usr/lib/systemd/system-generators/torcx-generator[1457]: time="2024-02-09T19:32:17Z" level=info msg="torcx already run" Feb 9 19:32:17.441376 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:32:17.441401 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:32:17.465279 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:32:17.589755 systemd[1]: Started kubelet.service. 
Feb 9 19:32:17.653584 kubelet[1500]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:32:17.653584 kubelet[1500]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 19:32:17.653584 kubelet[1500]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:32:17.654229 kubelet[1500]: I0209 19:32:17.653651 1500 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:32:18.421432 kubelet[1500]: I0209 19:32:18.421379 1500 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 19:32:18.421432 kubelet[1500]: I0209 19:32:18.421416 1500 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:32:18.421754 kubelet[1500]: I0209 19:32:18.421708 1500 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 19:32:18.426885 kubelet[1500]: I0209 19:32:18.426851 1500 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:32:18.432910 kubelet[1500]: I0209 19:32:18.432864 1500 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:32:18.433256 kubelet[1500]: I0209 19:32:18.433224 1500 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:32:18.433345 kubelet[1500]: I0209 19:32:18.433322 1500 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:32:18.433345 kubelet[1500]: I0209 19:32:18.433344 1500 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:32:18.433575 kubelet[1500]: I0209 19:32:18.433364 1500 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 19:32:18.433575 
kubelet[1500]: I0209 19:32:18.433486 1500 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:32:18.437494 kubelet[1500]: I0209 19:32:18.437468 1500 kubelet.go:405] "Attempting to sync node with API server" Feb 9 19:32:18.437623 kubelet[1500]: I0209 19:32:18.437505 1500 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:32:18.437623 kubelet[1500]: I0209 19:32:18.437535 1500 kubelet.go:309] "Adding apiserver pod source" Feb 9 19:32:18.437623 kubelet[1500]: I0209 19:32:18.437555 1500 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:32:18.438057 kubelet[1500]: E0209 19:32:18.438033 1500 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:18.438291 kubelet[1500]: E0209 19:32:18.438260 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:18.438613 kubelet[1500]: I0209 19:32:18.438592 1500 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:32:18.442153 kubelet[1500]: W0209 19:32:18.442110 1500 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:32:18.442788 kubelet[1500]: I0209 19:32:18.442755 1500 server.go:1168] "Started kubelet" Feb 9 19:32:18.443000 kubelet[1500]: I0209 19:32:18.442982 1500 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:32:18.444600 kubelet[1500]: I0209 19:32:18.444556 1500 server.go:461] "Adding debug handlers to kubelet server" Feb 9 19:32:18.446318 kubelet[1500]: I0209 19:32:18.443018 1500 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 19:32:18.454446 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 19:32:18.454673 kubelet[1500]: I0209 19:32:18.454648 1500 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:32:18.455913 kubelet[1500]: E0209 19:32:18.455875 1500 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:32:18.455913 kubelet[1500]: E0209 19:32:18.455907 1500 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:32:18.460560 kubelet[1500]: E0209 19:32:18.460443 1500 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.33.17b248b21652a056", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.33", UID:"10.128.0.33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.33"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 442731606, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 442731606, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:32:18.460734 kubelet[1500]: W0209 19:32:18.460701 1500 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.128.0.33" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:32:18.460734 kubelet[1500]: E0209 19:32:18.460732 1500 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.33" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:32:18.460888 kubelet[1500]: W0209 19:32:18.460833 1500 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:32:18.460888 kubelet[1500]: E0209 19:32:18.460852 1500 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:32:18.464579 kubelet[1500]: E0209 19:32:18.464475 1500 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.33.17b248b2171b7b28", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.33", UID:"10.128.0.33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", 
Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.33"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 455894824, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 455894824, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:32:18.465454 kubelet[1500]: I0209 19:32:18.465434 1500 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 19:32:18.465949 kubelet[1500]: I0209 19:32:18.465929 1500 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 19:32:18.467791 kubelet[1500]: W0209 19:32:18.467771 1500 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:32:18.467970 kubelet[1500]: E0209 19:32:18.467952 1500 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:32:18.468150 kubelet[1500]: E0209 19:32:18.468081 1500 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.128.0.33\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 9 19:32:18.505524 kubelet[1500]: I0209 19:32:18.505495 1500 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:32:18.505722 kubelet[1500]: I0209 19:32:18.505706 1500 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:32:18.505851 kubelet[1500]: I0209 19:32:18.505839 1500 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:32:18.506727 kubelet[1500]: E0209 19:32:18.506579 1500 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.33.17b248b21a002b15", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.33", UID:"10.128.0.33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.33 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.33"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 504436501, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 504436501, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User 
"system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:32:18.509297 kubelet[1500]: E0209 19:32:18.509172 1500 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.33.17b248b21a004626", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.33", UID:"10.128.0.33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.33 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.33"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 504443430, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 504443430, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:32:18.511293 kubelet[1500]: E0209 19:32:18.511194 1500 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.33.17b248b21a0065e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.33", UID:"10.128.0.33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.33 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.33"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 504451554, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 504451554, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:32:18.514171 kubelet[1500]: I0209 19:32:18.514150 1500 policy_none.go:49] "None policy: Start" Feb 9 19:32:18.518535 kubelet[1500]: I0209 19:32:18.518515 1500 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:32:18.518669 kubelet[1500]: I0209 19:32:18.518655 1500 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:32:18.525684 systemd[1]: Created slice kubepods.slice. Feb 9 19:32:18.538999 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 9 19:32:18.545702 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 19:32:18.555372 kubelet[1500]: I0209 19:32:18.555345 1500 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:32:18.556081 kubelet[1500]: I0209 19:32:18.556058 1500 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:32:18.559689 kubelet[1500]: E0209 19:32:18.559664 1500 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.33\" not found" Feb 9 19:32:18.561126 kubelet[1500]: E0209 19:32:18.561014 1500 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.33.17b248b21d454c44", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.33", UID:"10.128.0.33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.33"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 559298628, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 559298628, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:32:18.567610 kubelet[1500]: I0209 19:32:18.567562 1500 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.33" Feb 9 19:32:18.569418 kubelet[1500]: E0209 19:32:18.569392 1500 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.33" Feb 9 19:32:18.569799 kubelet[1500]: E0209 19:32:18.569707 1500 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.33.17b248b21a002b15", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.33", UID:"10.128.0.33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.33 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.33"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 504436501, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 567497677, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.33.17b248b21a002b15" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:32:18.571442 kubelet[1500]: E0209 19:32:18.571338 1500 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.33.17b248b21a004626", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.33", UID:"10.128.0.33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.33 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.33"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 504443430, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 567504397, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.33.17b248b21a004626" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:32:18.572775 kubelet[1500]: E0209 19:32:18.572687 1500 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.33.17b248b21a0065e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.33", UID:"10.128.0.33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.33 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.33"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 504451554, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 567508918, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.33.17b248b21a0065e2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:32:18.605682 kubelet[1500]: I0209 19:32:18.605634 1500 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:32:18.607751 kubelet[1500]: I0209 19:32:18.607721 1500 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 19:32:18.607751 kubelet[1500]: I0209 19:32:18.607753 1500 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 19:32:18.607957 kubelet[1500]: I0209 19:32:18.607778 1500 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 19:32:18.607957 kubelet[1500]: E0209 19:32:18.607850 1500 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:32:18.610204 kubelet[1500]: W0209 19:32:18.610167 1500 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:32:18.610346 kubelet[1500]: E0209 19:32:18.610233 1500 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:32:18.670668 kubelet[1500]: E0209 19:32:18.670617 1500 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.128.0.33\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 9 19:32:18.771855 kubelet[1500]: I0209 19:32:18.771812 1500 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.33" Feb 9 19:32:18.773804 kubelet[1500]: E0209 19:32:18.773772 1500 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" 
at the cluster scope" node="10.128.0.33" Feb 9 19:32:18.774108 kubelet[1500]: E0209 19:32:18.773991 1500 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.33.17b248b21a002b15", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.33", UID:"10.128.0.33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.33 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.33"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 504436501, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 771757308, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.33.17b248b21a002b15" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:32:18.775651 kubelet[1500]: E0209 19:32:18.775554 1500 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.33.17b248b21a004626", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.33", UID:"10.128.0.33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.33 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.33"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 504443430, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 771771894, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.33.17b248b21a004626" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:32:18.776876 kubelet[1500]: E0209 19:32:18.776798 1500 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.33.17b248b21a0065e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.33", UID:"10.128.0.33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.33 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.33"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 504451554, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 771776288, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.33.17b248b21a0065e2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:32:19.073064 kubelet[1500]: E0209 19:32:19.072933 1500 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.128.0.33\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 9 19:32:19.175355 kubelet[1500]: I0209 19:32:19.175308 1500 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.33" Feb 9 19:32:19.177011 kubelet[1500]: E0209 19:32:19.176912 1500 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.33.17b248b21a002b15", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.33", UID:"10.128.0.33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.33 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.33"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 504436501, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 32, 19, 175233061, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.33.17b248b21a002b15" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:32:19.177281 kubelet[1500]: E0209 19:32:19.176995 1500 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.33" Feb 9 19:32:19.178119 kubelet[1500]: E0209 19:32:19.178034 1500 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.33.17b248b21a004626", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.33", UID:"10.128.0.33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.33 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.33"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 504443430, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 32, 19, 175265650, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.33.17b248b21a004626" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:32:19.179431 kubelet[1500]: E0209 19:32:19.179364 1500 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.33.17b248b21a0065e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.33", UID:"10.128.0.33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.33 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.33"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 32, 18, 504451554, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 32, 19, 175272350, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.33.17b248b21a0065e2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
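The string of rejected events and the failed node registration above share one cause: every request is still arriving as "system:anonymous", so RBAC denies event patches, lease reads, and node creation alike until the kubelet finishes TLS bootstrapping and picks up real client credentials. The Go sketch below shows how the same question ("may the current identity create nodes?") can be asked explicitly with a SelfSubjectAccessReview; it is illustrative only, and the kubeconfig path is an assumed example rather than something taken from this log.

// rbac_check.go: illustrative sketch, not part of the kubelet. Asks the API
// server whether the current credentials may create Node objects, the same
// verb/resource pair rejected for system:anonymous in the entries above.
package main

import (
	"context"
	"fmt"
	"log"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path for illustration; the kubelet itself uses its bootstrap
	// kubeconfig and then the rotated client certificate.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	sar := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "create",
				Resource: "nodes",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}

Run with anonymous credentials against a default RBAC setup this would come back allowed=false, which matches the denials the kubelet keeps logging while it waits for its certificate.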
Feb 9 19:32:19.269337 kubelet[1500]: W0209 19:32:19.269296 1500 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.128.0.33" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:32:19.269337 kubelet[1500]: E0209 19:32:19.269340 1500 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.33" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:32:19.424734 kubelet[1500]: I0209 19:32:19.424519 1500 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 19:32:19.439138 kubelet[1500]: E0209 19:32:19.439074 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:19.829676 kubelet[1500]: E0209 19:32:19.829627 1500 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.128.0.33" not found Feb 9 19:32:19.879919 kubelet[1500]: E0209 19:32:19.879854 1500 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.33\" not found" node="10.128.0.33" Feb 9 19:32:19.978740 kubelet[1500]: I0209 19:32:19.978688 1500 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.33" Feb 9 19:32:19.984851 kubelet[1500]: I0209 19:32:19.984808 1500 kubelet_node_status.go:73] "Successfully registered node" node="10.128.0.33" Feb 9 19:32:20.014084 kubelet[1500]: E0209 19:32:20.014036 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:20.113214 sudo[1320]: pam_unix(sudo:session): session closed for user root Feb 9 19:32:20.114504 kubelet[1500]: E0209 19:32:20.114457 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:20.157428 sshd[1317]: pam_unix(sshd:session): session closed for user core Feb 9 19:32:20.162899 systemd[1]: sshd@4-10.128.0.33:22-147.75.109.163:52532.service: Deactivated successfully. Feb 9 19:32:20.164360 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:32:20.165382 systemd-logind[1135]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:32:20.166905 systemd-logind[1135]: Removed session 5. 
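The picture changes at 19:32:19.424, when the kubelet detects certificate rotation and reconnects with its new credentials: the registration attempt at 19:32:19.978 succeeds, and the kube-node-lease errors stop once the Lease object can be owned by the node. A short client-go sketch that inspects both halves of that handshake, the Node object and its heartbeat Lease, follows; the node name is taken from the log, while the kubeconfig path is an assumption for illustration.

// node_heartbeat.go: illustrative sketch that reads the two objects the
// kubelet above was trying to establish: the Node itself and the Lease it
// renews in kube-node-lease as its heartbeat.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const nodeName = "10.128.0.33" // node name from the log above

	// Assumed admin kubeconfig path, purely for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.TODO()

	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		log.Fatalf("node not registered: %v", err)
	}
	fmt.Println("node registered at:", node.CreationTimestamp)

	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		log.Fatalf("heartbeat lease missing: %v", err)
	}
	fmt.Println("lease last renewed:", lease.Spec.RenewTime)
}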
Feb 9 19:32:20.215205 kubelet[1500]: E0209 19:32:20.215156 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:20.315826 kubelet[1500]: E0209 19:32:20.315742 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:20.416849 kubelet[1500]: E0209 19:32:20.416548 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:20.440201 kubelet[1500]: E0209 19:32:20.440115 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:20.516925 kubelet[1500]: E0209 19:32:20.516857 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:20.617989 kubelet[1500]: E0209 19:32:20.617932 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:20.718777 kubelet[1500]: E0209 19:32:20.718715 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:20.819576 kubelet[1500]: E0209 19:32:20.819507 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:20.920285 kubelet[1500]: E0209 19:32:20.920225 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:21.021500 kubelet[1500]: E0209 19:32:21.021345 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:21.122307 kubelet[1500]: E0209 19:32:21.122241 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:21.223090 kubelet[1500]: E0209 19:32:21.223027 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:21.323965 kubelet[1500]: E0209 19:32:21.323808 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:21.424821 kubelet[1500]: E0209 19:32:21.424751 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:21.441223 kubelet[1500]: E0209 19:32:21.441169 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:21.525782 kubelet[1500]: E0209 19:32:21.525723 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:21.626558 kubelet[1500]: E0209 19:32:21.626409 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:21.727129 kubelet[1500]: E0209 19:32:21.727070 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:21.827719 kubelet[1500]: E0209 19:32:21.827664 1500 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.128.0.33\" not found" Feb 9 19:32:21.928663 kubelet[1500]: I0209 19:32:21.928534 1500 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 19:32:21.929708 env[1148]: time="2024-02-09T19:32:21.929653897Z" 
level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:32:21.930283 kubelet[1500]: I0209 19:32:21.930249 1500 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 19:32:22.441501 kubelet[1500]: I0209 19:32:22.441298 1500 apiserver.go:52] "Watching apiserver" Feb 9 19:32:22.441501 kubelet[1500]: E0209 19:32:22.441319 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:22.444787 kubelet[1500]: I0209 19:32:22.444755 1500 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:32:22.444940 kubelet[1500]: I0209 19:32:22.444893 1500 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:32:22.453577 systemd[1]: Created slice kubepods-besteffort-pod1397e142_82da_481a_93d4_c062cc80af7c.slice. Feb 9 19:32:22.467728 kubelet[1500]: I0209 19:32:22.467297 1500 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 19:32:22.467356 systemd[1]: Created slice kubepods-burstable-podb548d086_3545_4ee1_817d_f8a48345378c.slice. Feb 9 19:32:22.493931 kubelet[1500]: I0209 19:32:22.493876 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b548d086-3545-4ee1-817d-f8a48345378c-clustermesh-secrets\") pod \"cilium-pnt84\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " pod="kube-system/cilium-pnt84" Feb 9 19:32:22.494142 kubelet[1500]: I0209 19:32:22.494077 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b548d086-3545-4ee1-817d-f8a48345378c-cilium-config-path\") pod \"cilium-pnt84\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " pod="kube-system/cilium-pnt84" Feb 9 19:32:22.494142 kubelet[1500]: I0209 19:32:22.494137 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-host-proc-sys-kernel\") pod \"cilium-pnt84\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " pod="kube-system/cilium-pnt84" Feb 9 19:32:22.494320 kubelet[1500]: I0209 19:32:22.494229 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1397e142-82da-481a-93d4-c062cc80af7c-kube-proxy\") pod \"kube-proxy-9zgx6\" (UID: \"1397e142-82da-481a-93d4-c062cc80af7c\") " pod="kube-system/kube-proxy-9zgx6" Feb 9 19:32:22.494320 kubelet[1500]: I0209 19:32:22.494298 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgf8h\" (UniqueName: \"kubernetes.io/projected/1397e142-82da-481a-93d4-c062cc80af7c-kube-api-access-pgf8h\") pod \"kube-proxy-9zgx6\" (UID: \"1397e142-82da-481a-93d4-c062cc80af7c\") " pod="kube-system/kube-proxy-9zgx6" Feb 9 19:32:22.494461 kubelet[1500]: I0209 19:32:22.494333 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-bpf-maps\") pod \"cilium-pnt84\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " pod="kube-system/cilium-pnt84" Feb 9 19:32:22.494461 kubelet[1500]: I0209 19:32:22.494413 1500 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-lib-modules\") pod \"cilium-pnt84\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " pod="kube-system/cilium-pnt84" Feb 9 19:32:22.494601 kubelet[1500]: I0209 19:32:22.494486 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-cilium-cgroup\") pod \"cilium-pnt84\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " pod="kube-system/cilium-pnt84" Feb 9 19:32:22.494601 kubelet[1500]: I0209 19:32:22.494566 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-xtables-lock\") pod \"cilium-pnt84\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " pod="kube-system/cilium-pnt84" Feb 9 19:32:22.494711 kubelet[1500]: I0209 19:32:22.494649 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b548d086-3545-4ee1-817d-f8a48345378c-hubble-tls\") pod \"cilium-pnt84\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " pod="kube-system/cilium-pnt84" Feb 9 19:32:22.494776 kubelet[1500]: I0209 19:32:22.494712 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smmwx\" (UniqueName: \"kubernetes.io/projected/b548d086-3545-4ee1-817d-f8a48345378c-kube-api-access-smmwx\") pod \"cilium-pnt84\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " pod="kube-system/cilium-pnt84" Feb 9 19:32:22.494841 kubelet[1500]: I0209 19:32:22.494748 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1397e142-82da-481a-93d4-c062cc80af7c-xtables-lock\") pod \"kube-proxy-9zgx6\" (UID: \"1397e142-82da-481a-93d4-c062cc80af7c\") " pod="kube-system/kube-proxy-9zgx6" Feb 9 19:32:22.494926 kubelet[1500]: I0209 19:32:22.494902 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1397e142-82da-481a-93d4-c062cc80af7c-lib-modules\") pod \"kube-proxy-9zgx6\" (UID: \"1397e142-82da-481a-93d4-c062cc80af7c\") " pod="kube-system/kube-proxy-9zgx6" Feb 9 19:32:22.495041 kubelet[1500]: I0209 19:32:22.494991 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-cilium-run\") pod \"cilium-pnt84\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " pod="kube-system/cilium-pnt84" Feb 9 19:32:22.495123 kubelet[1500]: I0209 19:32:22.495065 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-cni-path\") pod \"cilium-pnt84\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " pod="kube-system/cilium-pnt84" Feb 9 19:32:22.495203 kubelet[1500]: I0209 19:32:22.495132 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-host-proc-sys-net\") pod \"cilium-pnt84\" 
(UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " pod="kube-system/cilium-pnt84" Feb 9 19:32:22.495290 kubelet[1500]: I0209 19:32:22.495270 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-hostproc\") pod \"cilium-pnt84\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " pod="kube-system/cilium-pnt84" Feb 9 19:32:22.495383 kubelet[1500]: I0209 19:32:22.495368 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-etc-cni-netd\") pod \"cilium-pnt84\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " pod="kube-system/cilium-pnt84" Feb 9 19:32:22.495448 kubelet[1500]: I0209 19:32:22.495395 1500 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:32:22.766647 env[1148]: time="2024-02-09T19:32:22.766583836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zgx6,Uid:1397e142-82da-481a-93d4-c062cc80af7c,Namespace:kube-system,Attempt:0,}" Feb 9 19:32:22.777504 env[1148]: time="2024-02-09T19:32:22.777446751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pnt84,Uid:b548d086-3545-4ee1-817d-f8a48345378c,Namespace:kube-system,Attempt:0,}" Feb 9 19:32:23.274885 env[1148]: time="2024-02-09T19:32:23.274832937Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:23.276271 env[1148]: time="2024-02-09T19:32:23.276217738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:23.279260 env[1148]: time="2024-02-09T19:32:23.279213650Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:23.281802 env[1148]: time="2024-02-09T19:32:23.281749119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:23.282687 env[1148]: time="2024-02-09T19:32:23.282653396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:23.285051 env[1148]: time="2024-02-09T19:32:23.285001098Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:23.285894 env[1148]: time="2024-02-09T19:32:23.285839228Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:23.288460 env[1148]: time="2024-02-09T19:32:23.288410571Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:23.313613 env[1148]: time="2024-02-09T19:32:23.313314199Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:32:23.313613 env[1148]: time="2024-02-09T19:32:23.313382336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:32:23.313613 env[1148]: time="2024-02-09T19:32:23.313403978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:32:23.315381 env[1148]: time="2024-02-09T19:32:23.315324712Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8 pid=1552 runtime=io.containerd.runc.v2 Feb 9 19:32:23.317662 env[1148]: time="2024-02-09T19:32:23.317546755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:32:23.317834 env[1148]: time="2024-02-09T19:32:23.317640705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:32:23.317834 env[1148]: time="2024-02-09T19:32:23.317659411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:32:23.320472 env[1148]: time="2024-02-09T19:32:23.318508044Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/12b833f99d83a8082ada0b56105e230aaa1719471d8d63ec5a41e89e886e5867 pid=1562 runtime=io.containerd.runc.v2 Feb 9 19:32:23.335612 systemd[1]: Started cri-containerd-bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8.scope. Feb 9 19:32:23.353484 systemd[1]: Started cri-containerd-12b833f99d83a8082ada0b56105e230aaa1719471d8d63ec5a41e89e886e5867.scope. Feb 9 19:32:23.405982 env[1148]: time="2024-02-09T19:32:23.405921295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pnt84,Uid:b548d086-3545-4ee1-817d-f8a48345378c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8\"" Feb 9 19:32:23.411572 kubelet[1500]: E0209 19:32:23.411507 1500 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url Feb 9 19:32:23.412093 env[1148]: time="2024-02-09T19:32:23.412040343Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:32:23.418131 env[1148]: time="2024-02-09T19:32:23.418079924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zgx6,Uid:1397e142-82da-481a-93d4-c062cc80af7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"12b833f99d83a8082ada0b56105e230aaa1719471d8d63ec5a41e89e886e5867\"" Feb 9 19:32:23.441769 kubelet[1500]: E0209 19:32:23.441711 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:23.612709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311326493.mount: Deactivated successfully. 
Feb 9 19:32:24.442466 kubelet[1500]: E0209 19:32:24.442384 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:25.443262 kubelet[1500]: E0209 19:32:25.443210 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:26.444172 kubelet[1500]: E0209 19:32:26.444101 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:27.444361 kubelet[1500]: E0209 19:32:27.444273 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:28.444784 kubelet[1500]: E0209 19:32:28.444688 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:28.592693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1260723751.mount: Deactivated successfully. Feb 9 19:32:29.445778 kubelet[1500]: E0209 19:32:29.445713 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:30.446414 kubelet[1500]: E0209 19:32:30.446320 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:31.446671 kubelet[1500]: E0209 19:32:31.446606 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:31.840426 env[1148]: time="2024-02-09T19:32:31.840351691Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:31.842981 env[1148]: time="2024-02-09T19:32:31.842921470Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:31.845528 env[1148]: time="2024-02-09T19:32:31.845485244Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:31.846296 env[1148]: time="2024-02-09T19:32:31.846252389Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:32:31.848195 env[1148]: time="2024-02-09T19:32:31.848128223Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 9 19:32:31.850089 env[1148]: time="2024-02-09T19:32:31.850038354Z" level=info msg="CreateContainer within sandbox \"bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:32:31.869614 env[1148]: time="2024-02-09T19:32:31.869549388Z" level=info msg="CreateContainer within sandbox \"bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e\"" Feb 9 19:32:31.870602 env[1148]: time="2024-02-09T19:32:31.870552556Z" level=info 
msg="StartContainer for \"9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e\"" Feb 9 19:32:31.904329 systemd[1]: Started cri-containerd-9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e.scope. Feb 9 19:32:31.953250 env[1148]: time="2024-02-09T19:32:31.952449427Z" level=info msg="StartContainer for \"9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e\" returns successfully" Feb 9 19:32:31.965807 systemd[1]: cri-containerd-9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e.scope: Deactivated successfully. Feb 9 19:32:32.446950 kubelet[1500]: E0209 19:32:32.446910 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:32.861073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e-rootfs.mount: Deactivated successfully. Feb 9 19:32:33.447328 kubelet[1500]: E0209 19:32:33.447260 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:33.791990 env[1148]: time="2024-02-09T19:32:33.791910094Z" level=info msg="shim disconnected" id=9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e Feb 9 19:32:33.791990 env[1148]: time="2024-02-09T19:32:33.791966798Z" level=warning msg="cleaning up after shim disconnected" id=9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e namespace=k8s.io Feb 9 19:32:33.791990 env[1148]: time="2024-02-09T19:32:33.791981885Z" level=info msg="cleaning up dead shim" Feb 9 19:32:33.804614 env[1148]: time="2024-02-09T19:32:33.804535991Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:32:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1678 runtime=io.containerd.runc.v2\n" Feb 9 19:32:33.973327 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 9 19:32:34.448033 kubelet[1500]: E0209 19:32:34.447970 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:34.499745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2328697397.mount: Deactivated successfully. Feb 9 19:32:34.651303 env[1148]: time="2024-02-09T19:32:34.651217296Z" level=info msg="CreateContainer within sandbox \"bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:32:34.684922 env[1148]: time="2024-02-09T19:32:34.684854948Z" level=info msg="CreateContainer within sandbox \"bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494\"" Feb 9 19:32:34.686012 env[1148]: time="2024-02-09T19:32:34.685971465Z" level=info msg="StartContainer for \"3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494\"" Feb 9 19:32:34.717959 systemd[1]: Started cri-containerd-3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494.scope. Feb 9 19:32:34.768558 env[1148]: time="2024-02-09T19:32:34.768498817Z" level=info msg="StartContainer for \"3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494\" returns successfully" Feb 9 19:32:34.786396 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:32:34.786767 systemd[1]: Stopped systemd-sysctl.service. 
Feb 9 19:32:34.787500 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:32:34.791051 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:32:34.801023 systemd[1]: cri-containerd-3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494.scope: Deactivated successfully. Feb 9 19:32:34.809307 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:32:34.943019 env[1148]: time="2024-02-09T19:32:34.942958030Z" level=info msg="shim disconnected" id=3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494 Feb 9 19:32:34.944127 env[1148]: time="2024-02-09T19:32:34.944091834Z" level=warning msg="cleaning up after shim disconnected" id=3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494 namespace=k8s.io Feb 9 19:32:34.944327 env[1148]: time="2024-02-09T19:32:34.944301616Z" level=info msg="cleaning up dead shim" Feb 9 19:32:34.961207 env[1148]: time="2024-02-09T19:32:34.961141857Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:32:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1745 runtime=io.containerd.runc.v2\n" Feb 9 19:32:35.032260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494-rootfs.mount: Deactivated successfully. Feb 9 19:32:35.324104 env[1148]: time="2024-02-09T19:32:35.323941918Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:35.326816 env[1148]: time="2024-02-09T19:32:35.326769652Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:35.329399 env[1148]: time="2024-02-09T19:32:35.329353496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:35.331826 env[1148]: time="2024-02-09T19:32:35.331782684Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:35.332481 env[1148]: time="2024-02-09T19:32:35.332431344Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 9 19:32:35.335144 env[1148]: time="2024-02-09T19:32:35.335095240Z" level=info msg="CreateContainer within sandbox \"12b833f99d83a8082ada0b56105e230aaa1719471d8d63ec5a41e89e886e5867\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:32:35.352659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1726672231.mount: Deactivated successfully. 
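The ImageCreate/ImageUpdate events above close with the kube-proxy pull resolving registry.k8s.io/kube-proxy:v1.27.10 to a digest-pinned image reference, after which a kube-proxy container is created inside the kube-proxy sandbox. Performing an equivalent pull directly with the containerd client looks roughly like the sketch below; illustrative only, with the same assumed socket path and namespace as before.

// pull_kube_proxy.go: illustrative sketch of the image pull containerd just
// performed above, issued directly through the containerd Go client.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // default socket, assumed
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same reference the kubelet asked for in the log; WithPullUnpack also
	// unpacks the layers into a snapshot so a container can be created from it.
	img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.27.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name(), "target:", img.Target().Digest)
}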
Feb 9 19:32:35.360919 env[1148]: time="2024-02-09T19:32:35.360861546Z" level=info msg="CreateContainer within sandbox \"12b833f99d83a8082ada0b56105e230aaa1719471d8d63ec5a41e89e886e5867\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"48d36c3f25e718f85e0420a887d688d7dcb8d268600294061b1c6f6ca3c957e2\"" Feb 9 19:32:35.361730 env[1148]: time="2024-02-09T19:32:35.361682849Z" level=info msg="StartContainer for \"48d36c3f25e718f85e0420a887d688d7dcb8d268600294061b1c6f6ca3c957e2\"" Feb 9 19:32:35.389312 systemd[1]: Started cri-containerd-48d36c3f25e718f85e0420a887d688d7dcb8d268600294061b1c6f6ca3c957e2.scope. Feb 9 19:32:35.436448 env[1148]: time="2024-02-09T19:32:35.436386286Z" level=info msg="StartContainer for \"48d36c3f25e718f85e0420a887d688d7dcb8d268600294061b1c6f6ca3c957e2\" returns successfully" Feb 9 19:32:35.450007 kubelet[1500]: E0209 19:32:35.449956 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:35.657639 env[1148]: time="2024-02-09T19:32:35.656773549Z" level=info msg="CreateContainer within sandbox \"bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:32:35.685628 env[1148]: time="2024-02-09T19:32:35.685572098Z" level=info msg="CreateContainer within sandbox \"bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223\"" Feb 9 19:32:35.687295 env[1148]: time="2024-02-09T19:32:35.687255063Z" level=info msg="StartContainer for \"3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223\"" Feb 9 19:32:35.728638 systemd[1]: Started cri-containerd-3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223.scope. Feb 9 19:32:35.785680 systemd[1]: cri-containerd-3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223.scope: Deactivated successfully. 
Feb 9 19:32:35.793164 env[1148]: time="2024-02-09T19:32:35.792798035Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb548d086_3545_4ee1_817d_f8a48345378c.slice/cri-containerd-3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223.scope/memory.events\": no such file or directory" Feb 9 19:32:35.795037 env[1148]: time="2024-02-09T19:32:35.794975344Z" level=info msg="StartContainer for \"3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223\" returns successfully" Feb 9 19:32:35.917830 env[1148]: time="2024-02-09T19:32:35.917690396Z" level=info msg="shim disconnected" id=3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223 Feb 9 19:32:35.917830 env[1148]: time="2024-02-09T19:32:35.917761803Z" level=warning msg="cleaning up after shim disconnected" id=3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223 namespace=k8s.io Feb 9 19:32:35.917830 env[1148]: time="2024-02-09T19:32:35.917777099Z" level=info msg="cleaning up dead shim" Feb 9 19:32:35.929796 env[1148]: time="2024-02-09T19:32:35.929722004Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:32:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1960 runtime=io.containerd.runc.v2\n" Feb 9 19:32:36.450720 kubelet[1500]: E0209 19:32:36.450661 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:36.672245 env[1148]: time="2024-02-09T19:32:36.672160871Z" level=info msg="CreateContainer within sandbox \"bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:32:36.693091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657970183.mount: Deactivated successfully. Feb 9 19:32:36.701742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1441657364.mount: Deactivated successfully. Feb 9 19:32:36.703097 kubelet[1500]: I0209 19:32:36.703050 1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9zgx6" podStartSLOduration=4.789632565 podCreationTimestamp="2024-02-09 19:32:20 +0000 UTC" firstStartedPulling="2024-02-09 19:32:23.419462388 +0000 UTC m=+5.824318207" lastFinishedPulling="2024-02-09 19:32:35.332830118 +0000 UTC m=+17.737685919" observedRunningTime="2024-02-09 19:32:35.687094412 +0000 UTC m=+18.091950236" watchObservedRunningTime="2024-02-09 19:32:36.703000277 +0000 UTC m=+19.107856104" Feb 9 19:32:36.707048 env[1148]: time="2024-02-09T19:32:36.706997544Z" level=info msg="CreateContainer within sandbox \"bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f\"" Feb 9 19:32:36.707682 env[1148]: time="2024-02-09T19:32:36.707647426Z" level=info msg="StartContainer for \"4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f\"" Feb 9 19:32:36.731070 systemd[1]: Started cri-containerd-4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f.scope. Feb 9 19:32:36.776484 systemd[1]: cri-containerd-4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f.scope: Deactivated successfully. 
Feb 9 19:32:36.779562 env[1148]: time="2024-02-09T19:32:36.779218338Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb548d086_3545_4ee1_817d_f8a48345378c.slice/cri-containerd-4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f.scope/memory.events\": no such file or directory" Feb 9 19:32:36.782572 env[1148]: time="2024-02-09T19:32:36.782522058Z" level=info msg="StartContainer for \"4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f\" returns successfully" Feb 9 19:32:36.809603 env[1148]: time="2024-02-09T19:32:36.809530662Z" level=info msg="shim disconnected" id=4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f Feb 9 19:32:36.809603 env[1148]: time="2024-02-09T19:32:36.809599278Z" level=warning msg="cleaning up after shim disconnected" id=4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f namespace=k8s.io Feb 9 19:32:36.809945 env[1148]: time="2024-02-09T19:32:36.809615376Z" level=info msg="cleaning up dead shim" Feb 9 19:32:36.821685 env[1148]: time="2024-02-09T19:32:36.821635561Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:32:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2018 runtime=io.containerd.runc.v2\n" Feb 9 19:32:37.451740 kubelet[1500]: E0209 19:32:37.451673 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:37.676434 env[1148]: time="2024-02-09T19:32:37.676377262Z" level=info msg="CreateContainer within sandbox \"bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:32:37.694783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount655991729.mount: Deactivated successfully. Feb 9 19:32:37.706637 env[1148]: time="2024-02-09T19:32:37.706293416Z" level=info msg="CreateContainer within sandbox \"bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418\"" Feb 9 19:32:37.707541 env[1148]: time="2024-02-09T19:32:37.707483701Z" level=info msg="StartContainer for \"102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418\"" Feb 9 19:32:37.733266 systemd[1]: Started cri-containerd-102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418.scope. 
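From 19:32:31 to 19:32:37 the same four-beat pattern repeats for each cilium init container (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) before the long-lived cilium-agent starts: CreateContainer returns an ID, StartContainer succeeds, the per-container scope is deactivated, and containerd logs "shim disconnected" while cleaning up. The stdlib-only Go sketch below reconstructs that lifecycle per container ID from journal text on stdin; it is a reading aid for logs like this one, not part of any component shown here.

// shim_lifecycle.go: stdlib-only sketch. Groups journal lines by 64-hex
// container ID and prints the lifecycle phases seen for each one.
// Assumes roughly one journal entry per line of input.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

var idRe = regexp.MustCompile(`[0-9a-f]{64}`)

func main() {
	// Phase markers visible in the entries above.
	phases := []string{"returns container id", "StartContainer", "Deactivated successfully", "shim disconnected"}

	events := map[string][]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		id := idRe.FindString(line) // first 64-hex token on the line, if any
		if id == "" {
			continue
		}
		for _, p := range phases {
			if strings.Contains(line, p) {
				events[id] = append(events[id], p)
			}
		}
	}
	for id, evs := range events {
		fmt.Printf("%s...: %s\n", id[:12], strings.Join(evs, " -> "))
	}
}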
Feb 9 19:32:37.784742 env[1148]: time="2024-02-09T19:32:37.784637639Z" level=info msg="StartContainer for \"102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418\" returns successfully" Feb 9 19:32:37.932254 kubelet[1500]: I0209 19:32:37.932217 1500 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:32:38.292230 kernel: Initializing XFRM netlink socket Feb 9 19:32:38.437719 kubelet[1500]: E0209 19:32:38.437674 1500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:38.451859 kubelet[1500]: E0209 19:32:38.451802 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:38.700148 kubelet[1500]: I0209 19:32:38.699997 1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pnt84" podStartSLOduration=10.261091175 podCreationTimestamp="2024-02-09 19:32:20 +0000 UTC" firstStartedPulling="2024-02-09 19:32:23.408076144 +0000 UTC m=+5.812931957" lastFinishedPulling="2024-02-09 19:32:31.846929527 +0000 UTC m=+14.251785341" observedRunningTime="2024-02-09 19:32:38.699498993 +0000 UTC m=+21.104354820" watchObservedRunningTime="2024-02-09 19:32:38.699944559 +0000 UTC m=+21.104800383" Feb 9 19:32:39.452353 kubelet[1500]: E0209 19:32:39.452286 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:39.949125 systemd-networkd[1029]: cilium_host: Link UP Feb 9 19:32:39.949353 systemd-networkd[1029]: cilium_net: Link UP Feb 9 19:32:39.949359 systemd-networkd[1029]: cilium_net: Gained carrier Feb 9 19:32:39.956322 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:32:39.957000 systemd-networkd[1029]: cilium_host: Gained carrier Feb 9 19:32:39.964690 systemd-networkd[1029]: cilium_net: Gained IPv6LL Feb 9 19:32:40.108243 systemd-networkd[1029]: cilium_vxlan: Link UP Feb 9 19:32:40.108259 systemd-networkd[1029]: cilium_vxlan: Gained carrier Feb 9 19:32:40.384237 kernel: NET: Registered PF_ALG protocol family Feb 9 19:32:40.452691 kubelet[1500]: E0209 19:32:40.452618 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:40.640562 systemd-networkd[1029]: cilium_host: Gained IPv6LL Feb 9 19:32:41.210705 systemd-networkd[1029]: lxc_health: Link UP Feb 9 19:32:41.224214 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:32:41.227010 systemd-networkd[1029]: lxc_health: Gained carrier Feb 9 19:32:41.452980 kubelet[1500]: E0209 19:32:41.452897 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:41.856881 systemd-networkd[1029]: cilium_vxlan: Gained IPv6LL Feb 9 19:32:42.368818 systemd-networkd[1029]: lxc_health: Gained IPv6LL Feb 9 19:32:42.453867 kubelet[1500]: E0209 19:32:42.453800 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:43.454364 kubelet[1500]: E0209 19:32:43.454314 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:44.456202 kubelet[1500]: E0209 19:32:44.456136 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:44.894134 kubelet[1500]: I0209 
19:32:44.894087 1500 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:32:44.902841 systemd[1]: Created slice kubepods-besteffort-pod7d5c6d8f_1cf9_466e_924f_8bd7e88f9654.slice. Feb 9 19:32:44.944097 kubelet[1500]: I0209 19:32:44.944034 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tdcq\" (UniqueName: \"kubernetes.io/projected/7d5c6d8f-1cf9-466e-924f-8bd7e88f9654-kube-api-access-7tdcq\") pod \"nginx-deployment-845c78c8b9-pbxq9\" (UID: \"7d5c6d8f-1cf9-466e-924f-8bd7e88f9654\") " pod="default/nginx-deployment-845c78c8b9-pbxq9" Feb 9 19:32:45.208596 env[1148]: time="2024-02-09T19:32:45.207244313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-pbxq9,Uid:7d5c6d8f-1cf9-466e-924f-8bd7e88f9654,Namespace:default,Attempt:0,}" Feb 9 19:32:45.279981 systemd-networkd[1029]: lxc77653a311721: Link UP Feb 9 19:32:45.296211 kernel: eth0: renamed from tmpced0b Feb 9 19:32:45.296358 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:32:45.308213 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc77653a311721: link becomes ready Feb 9 19:32:45.324794 systemd-networkd[1029]: lxc77653a311721: Gained carrier Feb 9 19:32:45.458018 kubelet[1500]: E0209 19:32:45.457956 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:46.459173 kubelet[1500]: E0209 19:32:46.459133 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:46.592984 systemd-networkd[1029]: lxc77653a311721: Gained IPv6LL Feb 9 19:32:46.652274 env[1148]: time="2024-02-09T19:32:46.652157376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:32:46.652880 env[1148]: time="2024-02-09T19:32:46.652242235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:32:46.652880 env[1148]: time="2024-02-09T19:32:46.652261868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:32:46.652880 env[1148]: time="2024-02-09T19:32:46.652449179Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ced0be1906fef70f27c6500897857e6d9bdeed674f5bf09656d478bc3d502948 pid=2536 runtime=io.containerd.runc.v2 Feb 9 19:32:46.681138 systemd[1]: run-containerd-runc-k8s.io-ced0be1906fef70f27c6500897857e6d9bdeed674f5bf09656d478bc3d502948-runc.p5lCcr.mount: Deactivated successfully. Feb 9 19:32:46.686595 systemd[1]: Started cri-containerd-ced0be1906fef70f27c6500897857e6d9bdeed674f5bf09656d478bc3d502948.scope. 
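Between 19:32:39 and 19:32:45 systemd-networkd reports cilium's virtual devices coming up: the cilium_host/cilium_net pair, the cilium_vxlan overlay device, lxc_health, and then one lxc* veth endpoint per pod (lxc77653a311721 for the nginx pod admitted above). The stdlib-only sketch below lists those interfaces and their operstate straight from /sys/class/net; it is meant to be run on the node itself and is purely illustrative.

// list_cilium_links.go: stdlib-only sketch. Shows the cilium_* and lxc*
// interfaces systemd-networkd reported above, read from /sys/class/net.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/sys/class/net")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		name := e.Name()
		if !strings.HasPrefix(name, "cilium_") && !strings.HasPrefix(name, "lxc") {
			continue
		}
		// operstate mirrors the "Gained carrier" / "link becomes ready" messages.
		state, err := os.ReadFile(filepath.Join("/sys/class/net", name, "operstate"))
		if err != nil {
			continue
		}
		fmt.Printf("%-20s %s\n", name, strings.TrimSpace(string(state)))
	}
}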
Feb 9 19:32:46.744091 env[1148]: time="2024-02-09T19:32:46.744035932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-pbxq9,Uid:7d5c6d8f-1cf9-466e-924f-8bd7e88f9654,Namespace:default,Attempt:0,} returns sandbox id \"ced0be1906fef70f27c6500897857e6d9bdeed674f5bf09656d478bc3d502948\"" Feb 9 19:32:46.746578 env[1148]: time="2024-02-09T19:32:46.746479110Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:32:47.460446 kubelet[1500]: E0209 19:32:47.460377 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:48.460681 kubelet[1500]: E0209 19:32:48.460604 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:48.811218 update_engine[1138]: I0209 19:32:48.810747 1138 update_attempter.cc:509] Updating boot flags... Feb 9 19:32:49.461213 kubelet[1500]: E0209 19:32:49.461105 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:49.928939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount614271689.mount: Deactivated successfully. Feb 9 19:32:50.461682 kubelet[1500]: E0209 19:32:50.461596 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:51.011075 env[1148]: time="2024-02-09T19:32:51.010999163Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:51.014143 env[1148]: time="2024-02-09T19:32:51.014096020Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:51.016467 env[1148]: time="2024-02-09T19:32:51.016428312Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:51.018901 env[1148]: time="2024-02-09T19:32:51.018859141Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:32:51.019803 env[1148]: time="2024-02-09T19:32:51.019745728Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:32:51.022822 env[1148]: time="2024-02-09T19:32:51.022781355Z" level=info msg="CreateContainer within sandbox \"ced0be1906fef70f27c6500897857e6d9bdeed674f5bf09656d478bc3d502948\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 19:32:51.038838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2903793504.mount: Deactivated successfully. 
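With the nginx sandbox in place, the kubelet pulls ghcr.io/flatcar/nginx:latest; the pull completes at 19:32:51 and the pod is reported running shortly afterwards (see the pod_startup_latency_tracker entry further down). Waiting for that transition from the API side can be done with a client-go watch, sketched below; the kubeconfig path is an assumed example, while the pod name comes from the log.

// wait_for_nginx.go: illustrative sketch. Watches the nginx pod named in the
// log above until it reports phase Running.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Pod name taken from the RunPodSandbox entry above.
	w, err := cs.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=nginx-deployment-845c78c8b9-pbxq9",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		fmt.Println("phase:", pod.Status.Phase)
		if pod.Status.Phase == corev1.PodRunning {
			return
		}
	}
}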
Feb 9 19:32:51.048132 env[1148]: time="2024-02-09T19:32:51.048068743Z" level=info msg="CreateContainer within sandbox \"ced0be1906fef70f27c6500897857e6d9bdeed674f5bf09656d478bc3d502948\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"23c499c042df964697208738a9505b0d5140ad9902605297dabeec311a8a5e07\"" Feb 9 19:32:51.049012 env[1148]: time="2024-02-09T19:32:51.048950716Z" level=info msg="StartContainer for \"23c499c042df964697208738a9505b0d5140ad9902605297dabeec311a8a5e07\"" Feb 9 19:32:51.078734 systemd[1]: Started cri-containerd-23c499c042df964697208738a9505b0d5140ad9902605297dabeec311a8a5e07.scope. Feb 9 19:32:51.123285 env[1148]: time="2024-02-09T19:32:51.123218035Z" level=info msg="StartContainer for \"23c499c042df964697208738a9505b0d5140ad9902605297dabeec311a8a5e07\" returns successfully" Feb 9 19:32:51.462802 kubelet[1500]: E0209 19:32:51.462617 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:51.723230 kubelet[1500]: I0209 19:32:51.723174 1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-pbxq9" podStartSLOduration=3.448781989 podCreationTimestamp="2024-02-09 19:32:44 +0000 UTC" firstStartedPulling="2024-02-09 19:32:46.745868928 +0000 UTC m=+29.150724730" lastFinishedPulling="2024-02-09 19:32:51.020214751 +0000 UTC m=+33.425070554" observedRunningTime="2024-02-09 19:32:51.722632589 +0000 UTC m=+34.127488413" watchObservedRunningTime="2024-02-09 19:32:51.723127813 +0000 UTC m=+34.127983635" Feb 9 19:32:52.462934 kubelet[1500]: E0209 19:32:52.462855 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:53.463139 kubelet[1500]: E0209 19:32:53.463071 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:54.464316 kubelet[1500]: E0209 19:32:54.464215 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:55.464770 kubelet[1500]: E0209 19:32:55.464694 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:56.465023 kubelet[1500]: E0209 19:32:56.464951 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:57.466222 kubelet[1500]: E0209 19:32:57.466159 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:58.437690 kubelet[1500]: E0209 19:32:58.437619 1500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:58.466922 kubelet[1500]: E0209 19:32:58.466857 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:59.096247 kubelet[1500]: I0209 19:32:59.096177 1500 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:32:59.103423 systemd[1]: Created slice kubepods-besteffort-pod6868c84a_6246_4c8f_b787_e712ad444bd2.slice. 
Feb 9 19:32:59.141218 kubelet[1500]: I0209 19:32:59.141153 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtnfh\" (UniqueName: \"kubernetes.io/projected/6868c84a-6246-4c8f-b787-e712ad444bd2-kube-api-access-dtnfh\") pod \"nfs-server-provisioner-0\" (UID: \"6868c84a-6246-4c8f-b787-e712ad444bd2\") " pod="default/nfs-server-provisioner-0" Feb 9 19:32:59.141218 kubelet[1500]: I0209 19:32:59.141224 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6868c84a-6246-4c8f-b787-e712ad444bd2-data\") pod \"nfs-server-provisioner-0\" (UID: \"6868c84a-6246-4c8f-b787-e712ad444bd2\") " pod="default/nfs-server-provisioner-0" Feb 9 19:32:59.409241 env[1148]: time="2024-02-09T19:32:59.408685428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6868c84a-6246-4c8f-b787-e712ad444bd2,Namespace:default,Attempt:0,}" Feb 9 19:32:59.453566 systemd-networkd[1029]: lxc5ec13c849f79: Link UP Feb 9 19:32:59.468134 kernel: eth0: renamed from tmpe743f Feb 9 19:32:59.468290 kubelet[1500]: E0209 19:32:59.468053 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:32:59.494627 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:32:59.494780 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5ec13c849f79: link becomes ready Feb 9 19:32:59.495445 systemd-networkd[1029]: lxc5ec13c849f79: Gained carrier Feb 9 19:32:59.737987 env[1148]: time="2024-02-09T19:32:59.737900194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:32:59.738302 env[1148]: time="2024-02-09T19:32:59.737954717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:32:59.738302 env[1148]: time="2024-02-09T19:32:59.737972778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:32:59.738302 env[1148]: time="2024-02-09T19:32:59.738239379Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e743fbe86f4dc6954da3e840d4ac198de6fed1260760f52ceb9783f65e2f9476 pid=2679 runtime=io.containerd.runc.v2 Feb 9 19:32:59.769426 systemd[1]: Started cri-containerd-e743fbe86f4dc6954da3e840d4ac198de6fed1260760f52ceb9783f65e2f9476.scope. Feb 9 19:32:59.827708 env[1148]: time="2024-02-09T19:32:59.827658487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6868c84a-6246-4c8f-b787-e712ad444bd2,Namespace:default,Attempt:0,} returns sandbox id \"e743fbe86f4dc6954da3e840d4ac198de6fed1260760f52ceb9783f65e2f9476\"" Feb 9 19:32:59.830500 env[1148]: time="2024-02-09T19:32:59.830460024Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 19:33:00.259613 systemd[1]: run-containerd-runc-k8s.io-e743fbe86f4dc6954da3e840d4ac198de6fed1260760f52ceb9783f65e2f9476-runc.WwFkbz.mount: Deactivated successfully. 
Feb 9 19:33:00.468818 kubelet[1500]: E0209 19:33:00.468752 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:01.120506 systemd-networkd[1029]: lxc5ec13c849f79: Gained IPv6LL Feb 9 19:33:01.469435 kubelet[1500]: E0209 19:33:01.469366 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:02.460886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4216868028.mount: Deactivated successfully. Feb 9 19:33:02.470062 kubelet[1500]: E0209 19:33:02.470012 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:03.471015 kubelet[1500]: E0209 19:33:03.470959 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:04.472194 kubelet[1500]: E0209 19:33:04.472091 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:04.865818 env[1148]: time="2024-02-09T19:33:04.865631364Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:33:04.868883 env[1148]: time="2024-02-09T19:33:04.868817362Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:33:04.872542 env[1148]: time="2024-02-09T19:33:04.872482335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:33:04.875604 env[1148]: time="2024-02-09T19:33:04.875539391Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:33:04.876712 env[1148]: time="2024-02-09T19:33:04.876665229Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 9 19:33:04.880172 env[1148]: time="2024-02-09T19:33:04.880113120Z" level=info msg="CreateContainer within sandbox \"e743fbe86f4dc6954da3e840d4ac198de6fed1260760f52ceb9783f65e2f9476\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 19:33:04.894884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135996733.mount: Deactivated successfully. Feb 9 19:33:04.904862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3556776508.mount: Deactivated successfully. 
Feb 9 19:33:04.909011 env[1148]: time="2024-02-09T19:33:04.908947250Z" level=info msg="CreateContainer within sandbox \"e743fbe86f4dc6954da3e840d4ac198de6fed1260760f52ceb9783f65e2f9476\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"e8bb4863f27fef95bbe1962c79f0aacf54a1d35be9a66bf43dcf1211e1134401\"" Feb 9 19:33:04.909953 env[1148]: time="2024-02-09T19:33:04.909909656Z" level=info msg="StartContainer for \"e8bb4863f27fef95bbe1962c79f0aacf54a1d35be9a66bf43dcf1211e1134401\"" Feb 9 19:33:04.939078 systemd[1]: Started cri-containerd-e8bb4863f27fef95bbe1962c79f0aacf54a1d35be9a66bf43dcf1211e1134401.scope. Feb 9 19:33:04.990017 env[1148]: time="2024-02-09T19:33:04.989948991Z" level=info msg="StartContainer for \"e8bb4863f27fef95bbe1962c79f0aacf54a1d35be9a66bf43dcf1211e1134401\" returns successfully" Feb 9 19:33:05.472897 kubelet[1500]: E0209 19:33:05.472827 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:05.754530 kubelet[1500]: I0209 19:33:05.754281 1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.706828625 podCreationTimestamp="2024-02-09 19:32:59 +0000 UTC" firstStartedPulling="2024-02-09 19:32:59.829676732 +0000 UTC m=+42.234532540" lastFinishedPulling="2024-02-09 19:33:04.87707698 +0000 UTC m=+47.281932782" observedRunningTime="2024-02-09 19:33:05.7537273 +0000 UTC m=+48.158583123" watchObservedRunningTime="2024-02-09 19:33:05.754228867 +0000 UTC m=+48.159084695" Feb 9 19:33:06.473553 kubelet[1500]: E0209 19:33:06.473462 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:07.474311 kubelet[1500]: E0209 19:33:07.474210 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:08.474973 kubelet[1500]: E0209 19:33:08.474897 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:09.475696 kubelet[1500]: E0209 19:33:09.475617 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:10.476692 kubelet[1500]: E0209 19:33:10.476634 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:11.476892 kubelet[1500]: E0209 19:33:11.476815 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:12.477813 kubelet[1500]: E0209 19:33:12.477731 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:13.477972 kubelet[1500]: E0209 19:33:13.477896 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:14.478521 kubelet[1500]: E0209 19:33:14.478457 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:14.939056 kubelet[1500]: I0209 19:33:14.938640 1500 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:33:14.946524 systemd[1]: Created slice kubepods-besteffort-pod19374c7d_e5a9_4ccd_866f_c9512598d11d.slice. 
Feb 9 19:33:15.040157 kubelet[1500]: I0209 19:33:15.040106 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjkzp\" (UniqueName: \"kubernetes.io/projected/19374c7d-e5a9-4ccd-866f-c9512598d11d-kube-api-access-vjkzp\") pod \"test-pod-1\" (UID: \"19374c7d-e5a9-4ccd-866f-c9512598d11d\") " pod="default/test-pod-1" Feb 9 19:33:15.040157 kubelet[1500]: I0209 19:33:15.040172 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-28117db7-087b-4d59-b890-ff2da31ed7a1\" (UniqueName: \"kubernetes.io/nfs/19374c7d-e5a9-4ccd-866f-c9512598d11d-pvc-28117db7-087b-4d59-b890-ff2da31ed7a1\") pod \"test-pod-1\" (UID: \"19374c7d-e5a9-4ccd-866f-c9512598d11d\") " pod="default/test-pod-1" Feb 9 19:33:15.183244 kernel: FS-Cache: Loaded Feb 9 19:33:15.238267 kernel: RPC: Registered named UNIX socket transport module. Feb 9 19:33:15.238452 kernel: RPC: Registered udp transport module. Feb 9 19:33:15.238494 kernel: RPC: Registered tcp transport module. Feb 9 19:33:15.243106 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 9 19:33:15.307213 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 19:33:15.479678 kubelet[1500]: E0209 19:33:15.479633 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:15.541487 kernel: NFS: Registering the id_resolver key type Feb 9 19:33:15.541671 kernel: Key type id_resolver registered Feb 9 19:33:15.541726 kernel: Key type id_legacy registered Feb 9 19:33:15.593828 nfsidmap[2798]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Feb 9 19:33:15.605044 nfsidmap[2799]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Feb 9 19:33:15.850932 env[1148]: time="2024-02-09T19:33:15.850742410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:19374c7d-e5a9-4ccd-866f-c9512598d11d,Namespace:default,Attempt:0,}" Feb 9 19:33:15.896867 systemd-networkd[1029]: lxc8b3017c9dea3: Link UP Feb 9 19:33:15.906300 kernel: eth0: renamed from tmp69814 Feb 9 19:33:15.931238 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:33:15.931380 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8b3017c9dea3: link becomes ready Feb 9 19:33:15.931611 systemd-networkd[1029]: lxc8b3017c9dea3: Gained carrier Feb 9 19:33:16.220966 env[1148]: time="2024-02-09T19:33:16.220866356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:33:16.220966 env[1148]: time="2024-02-09T19:33:16.220917333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:33:16.221320 env[1148]: time="2024-02-09T19:33:16.220936206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:33:16.221320 env[1148]: time="2024-02-09T19:33:16.221155351Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/69814bfa0f4ac66b2f67cb9de1291e209881dc96dd61009100ca7727901304bd pid=2825 runtime=io.containerd.runc.v2 Feb 9 19:33:16.250633 systemd[1]: run-containerd-runc-k8s.io-69814bfa0f4ac66b2f67cb9de1291e209881dc96dd61009100ca7727901304bd-runc.2HwZwa.mount: Deactivated successfully. Feb 9 19:33:16.254491 systemd[1]: Started cri-containerd-69814bfa0f4ac66b2f67cb9de1291e209881dc96dd61009100ca7727901304bd.scope. Feb 9 19:33:16.314704 env[1148]: time="2024-02-09T19:33:16.314647804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:19374c7d-e5a9-4ccd-866f-c9512598d11d,Namespace:default,Attempt:0,} returns sandbox id \"69814bfa0f4ac66b2f67cb9de1291e209881dc96dd61009100ca7727901304bd\"" Feb 9 19:33:16.317133 env[1148]: time="2024-02-09T19:33:16.317083479Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:33:16.481210 kubelet[1500]: E0209 19:33:16.481017 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:16.541516 env[1148]: time="2024-02-09T19:33:16.541447634Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:33:16.544026 env[1148]: time="2024-02-09T19:33:16.543985363Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:33:16.546302 env[1148]: time="2024-02-09T19:33:16.546266049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:33:16.548679 env[1148]: time="2024-02-09T19:33:16.548636995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:33:16.549719 env[1148]: time="2024-02-09T19:33:16.549662888Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:33:16.552661 env[1148]: time="2024-02-09T19:33:16.552618708Z" level=info msg="CreateContainer within sandbox \"69814bfa0f4ac66b2f67cb9de1291e209881dc96dd61009100ca7727901304bd\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 19:33:16.571831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1805820754.mount: Deactivated successfully. Feb 9 19:33:16.577040 env[1148]: time="2024-02-09T19:33:16.576982866Z" level=info msg="CreateContainer within sandbox \"69814bfa0f4ac66b2f67cb9de1291e209881dc96dd61009100ca7727901304bd\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"65af7aee8ae1bc697b7632c9da86c3822d23d9f030cea7dfa67d015fcab4d227\"" Feb 9 19:33:16.577905 env[1148]: time="2024-02-09T19:33:16.577773326Z" level=info msg="StartContainer for \"65af7aee8ae1bc697b7632c9da86c3822d23d9f030cea7dfa67d015fcab4d227\"" Feb 9 19:33:16.601372 systemd[1]: Started cri-containerd-65af7aee8ae1bc697b7632c9da86c3822d23d9f030cea7dfa67d015fcab4d227.scope. 
Feb 9 19:33:16.648989 env[1148]: time="2024-02-09T19:33:16.648938139Z" level=info msg="StartContainer for \"65af7aee8ae1bc697b7632c9da86c3822d23d9f030cea7dfa67d015fcab4d227\" returns successfully" Feb 9 19:33:16.784268 kubelet[1500]: I0209 19:33:16.784104 1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.550277033 podCreationTimestamp="2024-02-09 19:32:59 +0000 UTC" firstStartedPulling="2024-02-09 19:33:16.316371263 +0000 UTC m=+58.721227076" lastFinishedPulling="2024-02-09 19:33:16.550150485 +0000 UTC m=+58.955006299" observedRunningTime="2024-02-09 19:33:16.783600934 +0000 UTC m=+59.188456759" watchObservedRunningTime="2024-02-09 19:33:16.784056256 +0000 UTC m=+59.188912097" Feb 9 19:33:17.225009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228626084.mount: Deactivated successfully. Feb 9 19:33:17.481901 kubelet[1500]: E0209 19:33:17.481745 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:17.632450 systemd-networkd[1029]: lxc8b3017c9dea3: Gained IPv6LL Feb 9 19:33:18.438324 kubelet[1500]: E0209 19:33:18.438260 1500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:18.482091 kubelet[1500]: E0209 19:33:18.482031 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:19.482849 kubelet[1500]: E0209 19:33:19.482778 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:19.669011 systemd[1]: run-containerd-runc-k8s.io-102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418-runc.osgSyb.mount: Deactivated successfully. Feb 9 19:33:19.691381 env[1148]: time="2024-02-09T19:33:19.691303082Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:33:19.701799 env[1148]: time="2024-02-09T19:33:19.701747103Z" level=info msg="StopContainer for \"102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418\" with timeout 1 (s)" Feb 9 19:33:19.702323 env[1148]: time="2024-02-09T19:33:19.702282005Z" level=info msg="Stop container \"102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418\" with signal terminated" Feb 9 19:33:19.712014 systemd-networkd[1029]: lxc_health: Link DOWN Feb 9 19:33:19.712028 systemd-networkd[1029]: lxc_health: Lost carrier Feb 9 19:33:19.737620 systemd[1]: cri-containerd-102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418.scope: Deactivated successfully. Feb 9 19:33:19.737972 systemd[1]: cri-containerd-102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418.scope: Consumed 8.501s CPU time. Feb 9 19:33:19.764374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418-rootfs.mount: Deactivated successfully. 
Feb 9 19:33:20.483632 kubelet[1500]: E0209 19:33:20.483576 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:20.713957 env[1148]: time="2024-02-09T19:33:20.713873029Z" level=info msg="Kill container \"102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418\"" Feb 9 19:33:21.341897 env[1148]: time="2024-02-09T19:33:21.341831789Z" level=info msg="shim disconnected" id=102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418 Feb 9 19:33:21.341897 env[1148]: time="2024-02-09T19:33:21.341896663Z" level=warning msg="cleaning up after shim disconnected" id=102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418 namespace=k8s.io Feb 9 19:33:21.342229 env[1148]: time="2024-02-09T19:33:21.341910834Z" level=info msg="cleaning up dead shim" Feb 9 19:33:21.354026 env[1148]: time="2024-02-09T19:33:21.353956059Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:33:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2959 runtime=io.containerd.runc.v2\n" Feb 9 19:33:21.356632 env[1148]: time="2024-02-09T19:33:21.356580456Z" level=info msg="StopContainer for \"102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418\" returns successfully" Feb 9 19:33:21.357530 env[1148]: time="2024-02-09T19:33:21.357490675Z" level=info msg="StopPodSandbox for \"bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8\"" Feb 9 19:33:21.357660 env[1148]: time="2024-02-09T19:33:21.357569157Z" level=info msg="Container to stop \"9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:33:21.357660 env[1148]: time="2024-02-09T19:33:21.357596502Z" level=info msg="Container to stop \"4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:33:21.357660 env[1148]: time="2024-02-09T19:33:21.357615522Z" level=info msg="Container to stop \"102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:33:21.357660 env[1148]: time="2024-02-09T19:33:21.357633753Z" level=info msg="Container to stop \"3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:33:21.357660 env[1148]: time="2024-02-09T19:33:21.357652753Z" level=info msg="Container to stop \"3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:33:21.360419 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8-shm.mount: Deactivated successfully. Feb 9 19:33:21.369374 systemd[1]: cri-containerd-bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8.scope: Deactivated successfully. Feb 9 19:33:21.397616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8-rootfs.mount: Deactivated successfully. 
Feb 9 19:33:21.404411 env[1148]: time="2024-02-09T19:33:21.404354041Z" level=info msg="shim disconnected" id=bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8 Feb 9 19:33:21.404838 env[1148]: time="2024-02-09T19:33:21.404801602Z" level=warning msg="cleaning up after shim disconnected" id=bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8 namespace=k8s.io Feb 9 19:33:21.404838 env[1148]: time="2024-02-09T19:33:21.404833882Z" level=info msg="cleaning up dead shim" Feb 9 19:33:21.418093 env[1148]: time="2024-02-09T19:33:21.418026276Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:33:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2990 runtime=io.containerd.runc.v2\n" Feb 9 19:33:21.418524 env[1148]: time="2024-02-09T19:33:21.418484147Z" level=info msg="TearDown network for sandbox \"bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8\" successfully" Feb 9 19:33:21.418644 env[1148]: time="2024-02-09T19:33:21.418522818Z" level=info msg="StopPodSandbox for \"bb50a711aab39f1c36354671db7508307e68b7eec1a6e7ce47304a385b074cd8\" returns successfully" Feb 9 19:33:21.484055 kubelet[1500]: E0209 19:33:21.483992 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:21.583711 kubelet[1500]: I0209 19:33:21.583653 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b548d086-3545-4ee1-817d-f8a48345378c-cilium-config-path\") pod \"b548d086-3545-4ee1-817d-f8a48345378c\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " Feb 9 19:33:21.583711 kubelet[1500]: I0209 19:33:21.583719 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-cilium-run\") pod \"b548d086-3545-4ee1-817d-f8a48345378c\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " Feb 9 19:33:21.584004 kubelet[1500]: I0209 19:33:21.583752 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-host-proc-sys-net\") pod \"b548d086-3545-4ee1-817d-f8a48345378c\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " Feb 9 19:33:21.584004 kubelet[1500]: I0209 19:33:21.583784 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-bpf-maps\") pod \"b548d086-3545-4ee1-817d-f8a48345378c\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " Feb 9 19:33:21.584004 kubelet[1500]: I0209 19:33:21.583811 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-xtables-lock\") pod \"b548d086-3545-4ee1-817d-f8a48345378c\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " Feb 9 19:33:21.584004 kubelet[1500]: I0209 19:33:21.583843 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b548d086-3545-4ee1-817d-f8a48345378c-hubble-tls\") pod \"b548d086-3545-4ee1-817d-f8a48345378c\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " Feb 9 19:33:21.584004 kubelet[1500]: I0209 19:33:21.583872 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-smmwx\" (UniqueName: \"kubernetes.io/projected/b548d086-3545-4ee1-817d-f8a48345378c-kube-api-access-smmwx\") pod \"b548d086-3545-4ee1-817d-f8a48345378c\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " Feb 9 19:33:21.584004 kubelet[1500]: I0209 19:33:21.583902 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-host-proc-sys-kernel\") pod \"b548d086-3545-4ee1-817d-f8a48345378c\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " Feb 9 19:33:21.584370 kubelet[1500]: I0209 19:33:21.583929 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-cilium-cgroup\") pod \"b548d086-3545-4ee1-817d-f8a48345378c\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " Feb 9 19:33:21.584370 kubelet[1500]: I0209 19:33:21.583959 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-hostproc\") pod \"b548d086-3545-4ee1-817d-f8a48345378c\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " Feb 9 19:33:21.584370 kubelet[1500]: I0209 19:33:21.583996 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-etc-cni-netd\") pod \"b548d086-3545-4ee1-817d-f8a48345378c\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " Feb 9 19:33:21.584370 kubelet[1500]: I0209 19:33:21.584032 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b548d086-3545-4ee1-817d-f8a48345378c-clustermesh-secrets\") pod \"b548d086-3545-4ee1-817d-f8a48345378c\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " Feb 9 19:33:21.584370 kubelet[1500]: I0209 19:33:21.584068 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-lib-modules\") pod \"b548d086-3545-4ee1-817d-f8a48345378c\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " Feb 9 19:33:21.584370 kubelet[1500]: I0209 19:33:21.584100 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-cni-path\") pod \"b548d086-3545-4ee1-817d-f8a48345378c\" (UID: \"b548d086-3545-4ee1-817d-f8a48345378c\") " Feb 9 19:33:21.584680 kubelet[1500]: I0209 19:33:21.584217 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-cni-path" (OuterVolumeSpecName: "cni-path") pod "b548d086-3545-4ee1-817d-f8a48345378c" (UID: "b548d086-3545-4ee1-817d-f8a48345378c"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:21.584680 kubelet[1500]: W0209 19:33:21.584513 1500 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b548d086-3545-4ee1-817d-f8a48345378c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:33:21.586621 kubelet[1500]: I0209 19:33:21.586575 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b548d086-3545-4ee1-817d-f8a48345378c" (UID: "b548d086-3545-4ee1-817d-f8a48345378c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:21.586862 kubelet[1500]: I0209 19:33:21.586828 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b548d086-3545-4ee1-817d-f8a48345378c" (UID: "b548d086-3545-4ee1-817d-f8a48345378c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:21.587026 kubelet[1500]: I0209 19:33:21.587006 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b548d086-3545-4ee1-817d-f8a48345378c" (UID: "b548d086-3545-4ee1-817d-f8a48345378c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:21.587191 kubelet[1500]: I0209 19:33:21.587157 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b548d086-3545-4ee1-817d-f8a48345378c" (UID: "b548d086-3545-4ee1-817d-f8a48345378c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:21.587352 kubelet[1500]: I0209 19:33:21.587321 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b548d086-3545-4ee1-817d-f8a48345378c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b548d086-3545-4ee1-817d-f8a48345378c" (UID: "b548d086-3545-4ee1-817d-f8a48345378c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:33:21.587444 kubelet[1500]: I0209 19:33:21.587389 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b548d086-3545-4ee1-817d-f8a48345378c" (UID: "b548d086-3545-4ee1-817d-f8a48345378c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:21.587444 kubelet[1500]: I0209 19:33:21.587423 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b548d086-3545-4ee1-817d-f8a48345378c" (UID: "b548d086-3545-4ee1-817d-f8a48345378c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:21.587573 kubelet[1500]: I0209 19:33:21.587510 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-hostproc" (OuterVolumeSpecName: "hostproc") pod "b548d086-3545-4ee1-817d-f8a48345378c" (UID: "b548d086-3545-4ee1-817d-f8a48345378c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:21.587573 kubelet[1500]: I0209 19:33:21.587541 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b548d086-3545-4ee1-817d-f8a48345378c" (UID: "b548d086-3545-4ee1-817d-f8a48345378c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:21.588001 kubelet[1500]: I0209 19:33:21.587974 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b548d086-3545-4ee1-817d-f8a48345378c" (UID: "b548d086-3545-4ee1-817d-f8a48345378c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:21.593785 systemd[1]: var-lib-kubelet-pods-b548d086\x2d3545\x2d4ee1\x2d817d\x2df8a48345378c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:33:21.597121 systemd[1]: var-lib-kubelet-pods-b548d086\x2d3545\x2d4ee1\x2d817d\x2df8a48345378c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsmmwx.mount: Deactivated successfully. Feb 9 19:33:21.599682 kubelet[1500]: I0209 19:33:21.599647 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b548d086-3545-4ee1-817d-f8a48345378c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b548d086-3545-4ee1-817d-f8a48345378c" (UID: "b548d086-3545-4ee1-817d-f8a48345378c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:33:21.600770 kubelet[1500]: I0209 19:33:21.600737 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b548d086-3545-4ee1-817d-f8a48345378c-kube-api-access-smmwx" (OuterVolumeSpecName: "kube-api-access-smmwx") pod "b548d086-3545-4ee1-817d-f8a48345378c" (UID: "b548d086-3545-4ee1-817d-f8a48345378c"). InnerVolumeSpecName "kube-api-access-smmwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:33:21.606143 systemd[1]: var-lib-kubelet-pods-b548d086\x2d3545\x2d4ee1\x2d817d\x2df8a48345378c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:33:21.606983 kubelet[1500]: I0209 19:33:21.606239 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b548d086-3545-4ee1-817d-f8a48345378c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b548d086-3545-4ee1-817d-f8a48345378c" (UID: "b548d086-3545-4ee1-817d-f8a48345378c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:33:21.684886 kubelet[1500]: I0209 19:33:21.684830 1500 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-bpf-maps\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:21.684886 kubelet[1500]: I0209 19:33:21.684878 1500 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-xtables-lock\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:21.684886 kubelet[1500]: I0209 19:33:21.684896 1500 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b548d086-3545-4ee1-817d-f8a48345378c-hubble-tls\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:21.684886 kubelet[1500]: I0209 19:33:21.684915 1500 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-smmwx\" (UniqueName: \"kubernetes.io/projected/b548d086-3545-4ee1-817d-f8a48345378c-kube-api-access-smmwx\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:21.685342 kubelet[1500]: I0209 19:33:21.684932 1500 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-host-proc-sys-kernel\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:21.685342 kubelet[1500]: I0209 19:33:21.684946 1500 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-cilium-cgroup\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:21.685342 kubelet[1500]: I0209 19:33:21.684962 1500 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-hostproc\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:21.685342 kubelet[1500]: I0209 19:33:21.684976 1500 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-etc-cni-netd\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:21.685342 kubelet[1500]: I0209 19:33:21.684990 1500 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b548d086-3545-4ee1-817d-f8a48345378c-clustermesh-secrets\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:21.685342 kubelet[1500]: I0209 19:33:21.685005 1500 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-lib-modules\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:21.685342 kubelet[1500]: I0209 19:33:21.685020 1500 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-cni-path\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:21.685342 kubelet[1500]: I0209 19:33:21.685034 1500 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b548d086-3545-4ee1-817d-f8a48345378c-cilium-config-path\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:21.685612 kubelet[1500]: I0209 19:33:21.685050 1500 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-cilium-run\") on node \"10.128.0.33\" 
DevicePath \"\"" Feb 9 19:33:21.685612 kubelet[1500]: I0209 19:33:21.685068 1500 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b548d086-3545-4ee1-817d-f8a48345378c-host-proc-sys-net\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:21.785386 kubelet[1500]: I0209 19:33:21.785340 1500 scope.go:115] "RemoveContainer" containerID="102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418" Feb 9 19:33:21.790538 systemd[1]: Removed slice kubepods-burstable-podb548d086_3545_4ee1_817d_f8a48345378c.slice. Feb 9 19:33:21.790711 systemd[1]: kubepods-burstable-podb548d086_3545_4ee1_817d_f8a48345378c.slice: Consumed 8.643s CPU time. Feb 9 19:33:21.793728 env[1148]: time="2024-02-09T19:33:21.793681894Z" level=info msg="RemoveContainer for \"102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418\"" Feb 9 19:33:21.799831 env[1148]: time="2024-02-09T19:33:21.799774025Z" level=info msg="RemoveContainer for \"102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418\" returns successfully" Feb 9 19:33:21.800123 kubelet[1500]: I0209 19:33:21.800096 1500 scope.go:115] "RemoveContainer" containerID="4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f" Feb 9 19:33:21.801491 env[1148]: time="2024-02-09T19:33:21.801453630Z" level=info msg="RemoveContainer for \"4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f\"" Feb 9 19:33:21.805071 env[1148]: time="2024-02-09T19:33:21.805012911Z" level=info msg="RemoveContainer for \"4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f\" returns successfully" Feb 9 19:33:21.805300 kubelet[1500]: I0209 19:33:21.805261 1500 scope.go:115] "RemoveContainer" containerID="3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223" Feb 9 19:33:21.806575 env[1148]: time="2024-02-09T19:33:21.806542005Z" level=info msg="RemoveContainer for \"3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223\"" Feb 9 19:33:21.810543 env[1148]: time="2024-02-09T19:33:21.810504422Z" level=info msg="RemoveContainer for \"3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223\" returns successfully" Feb 9 19:33:21.810760 kubelet[1500]: I0209 19:33:21.810726 1500 scope.go:115] "RemoveContainer" containerID="3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494" Feb 9 19:33:21.812084 env[1148]: time="2024-02-09T19:33:21.812040493Z" level=info msg="RemoveContainer for \"3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494\"" Feb 9 19:33:21.815849 env[1148]: time="2024-02-09T19:33:21.815790882Z" level=info msg="RemoveContainer for \"3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494\" returns successfully" Feb 9 19:33:21.816094 kubelet[1500]: I0209 19:33:21.816064 1500 scope.go:115] "RemoveContainer" containerID="9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e" Feb 9 19:33:21.817714 env[1148]: time="2024-02-09T19:33:21.817652677Z" level=info msg="RemoveContainer for \"9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e\"" Feb 9 19:33:21.821531 env[1148]: time="2024-02-09T19:33:21.821496867Z" level=info msg="RemoveContainer for \"9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e\" returns successfully" Feb 9 19:33:21.821814 kubelet[1500]: I0209 19:33:21.821766 1500 scope.go:115] "RemoveContainer" containerID="102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418" Feb 9 19:33:21.822486 env[1148]: time="2024-02-09T19:33:21.822384478Z" level=error 
msg="ContainerStatus for \"102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418\": not found" Feb 9 19:33:21.822653 kubelet[1500]: E0209 19:33:21.822628 1500 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418\": not found" containerID="102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418" Feb 9 19:33:21.822745 kubelet[1500]: I0209 19:33:21.822678 1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418} err="failed to get container status \"102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418\": rpc error: code = NotFound desc = an error occurred when try to find container \"102b6ff60756d69ea4744114155997805404a5358d0c656b41cad4b321ebd418\": not found" Feb 9 19:33:21.822745 kubelet[1500]: I0209 19:33:21.822699 1500 scope.go:115] "RemoveContainer" containerID="4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f" Feb 9 19:33:21.823053 env[1148]: time="2024-02-09T19:33:21.822977658Z" level=error msg="ContainerStatus for \"4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f\": not found" Feb 9 19:33:21.823201 kubelet[1500]: E0209 19:33:21.823156 1500 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f\": not found" containerID="4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f" Feb 9 19:33:21.823287 kubelet[1500]: I0209 19:33:21.823235 1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f} err="failed to get container status \"4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ea3ddb462bfdf839589dd465b854d1df76472208ecdb39f841d46bfae073d3f\": not found" Feb 9 19:33:21.823287 kubelet[1500]: I0209 19:33:21.823261 1500 scope.go:115] "RemoveContainer" containerID="3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223" Feb 9 19:33:21.823588 env[1148]: time="2024-02-09T19:33:21.823517021Z" level=error msg="ContainerStatus for \"3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223\": not found" Feb 9 19:33:21.823711 kubelet[1500]: E0209 19:33:21.823691 1500 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223\": not found" containerID="3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223" Feb 9 19:33:21.823795 kubelet[1500]: I0209 19:33:21.823731 1500 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={Type:containerd ID:3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223} err="failed to get container status \"3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223\": rpc error: code = NotFound desc = an error occurred when try to find container \"3faff12f1e5efd452bee01234fef905e4e436dfced1acedad47ff9811ed19223\": not found" Feb 9 19:33:21.823795 kubelet[1500]: I0209 19:33:21.823755 1500 scope.go:115] "RemoveContainer" containerID="3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494" Feb 9 19:33:21.824150 env[1148]: time="2024-02-09T19:33:21.824089323Z" level=error msg="ContainerStatus for \"3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494\": not found" Feb 9 19:33:21.824310 kubelet[1500]: E0209 19:33:21.824286 1500 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494\": not found" containerID="3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494" Feb 9 19:33:21.824409 kubelet[1500]: I0209 19:33:21.824330 1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494} err="failed to get container status \"3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494\": rpc error: code = NotFound desc = an error occurred when try to find container \"3fb26fcedcbfdcd86e6551e41e80bf04832c869865eed1c825df538479c73494\": not found" Feb 9 19:33:21.824409 kubelet[1500]: I0209 19:33:21.824344 1500 scope.go:115] "RemoveContainer" containerID="9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e" Feb 9 19:33:21.824610 env[1148]: time="2024-02-09T19:33:21.824543878Z" level=error msg="ContainerStatus for \"9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e\": not found" Feb 9 19:33:21.824854 kubelet[1500]: E0209 19:33:21.824811 1500 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e\": not found" containerID="9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e" Feb 9 19:33:21.824854 kubelet[1500]: I0209 19:33:21.824854 1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e} err="failed to get container status \"9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a4a91947523736c629600e692555e9229457569f497cead219ce6fb33c2684e\": not found" Feb 9 19:33:22.484903 kubelet[1500]: E0209 19:33:22.484834 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:22.611245 kubelet[1500]: I0209 19:33:22.611176 1500 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=b548d086-3545-4ee1-817d-f8a48345378c 
path="/var/lib/kubelet/pods/b548d086-3545-4ee1-817d-f8a48345378c/volumes" Feb 9 19:33:23.118063 kubelet[1500]: I0209 19:33:23.118014 1500 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:33:23.118377 kubelet[1500]: E0209 19:33:23.118097 1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b548d086-3545-4ee1-817d-f8a48345378c" containerName="apply-sysctl-overwrites" Feb 9 19:33:23.118377 kubelet[1500]: E0209 19:33:23.118115 1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b548d086-3545-4ee1-817d-f8a48345378c" containerName="clean-cilium-state" Feb 9 19:33:23.118377 kubelet[1500]: E0209 19:33:23.118128 1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b548d086-3545-4ee1-817d-f8a48345378c" containerName="mount-cgroup" Feb 9 19:33:23.118377 kubelet[1500]: E0209 19:33:23.118138 1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b548d086-3545-4ee1-817d-f8a48345378c" containerName="mount-bpf-fs" Feb 9 19:33:23.118377 kubelet[1500]: E0209 19:33:23.118148 1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b548d086-3545-4ee1-817d-f8a48345378c" containerName="cilium-agent" Feb 9 19:33:23.118377 kubelet[1500]: I0209 19:33:23.118199 1500 memory_manager.go:346] "RemoveStaleState removing state" podUID="b548d086-3545-4ee1-817d-f8a48345378c" containerName="cilium-agent" Feb 9 19:33:23.124922 systemd[1]: Created slice kubepods-besteffort-podc1014bff_7e53_4d3e_84e3_ab612a083726.slice. Feb 9 19:33:23.138520 kubelet[1500]: W0209 19:33:23.138482 1500 reflector.go:533] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.128.0.33" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.33' and this object Feb 9 19:33:23.138520 kubelet[1500]: E0209 19:33:23.138528 1500 reflector.go:148] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.128.0.33" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.33' and this object Feb 9 19:33:23.226792 kubelet[1500]: I0209 19:33:23.226746 1500 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:33:23.233899 systemd[1]: Created slice kubepods-burstable-pod6098ecda_7681_4e7e_9ae5_5526d4d18877.slice. 
Feb 9 19:33:23.294467 kubelet[1500]: I0209 19:33:23.294420 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1014bff-7e53-4d3e-84e3-ab612a083726-cilium-config-path\") pod \"cilium-operator-574c4bb98d-4zzzp\" (UID: \"c1014bff-7e53-4d3e-84e3-ab612a083726\") " pod="kube-system/cilium-operator-574c4bb98d-4zzzp" Feb 9 19:33:23.294467 kubelet[1500]: I0209 19:33:23.294488 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prkk8\" (UniqueName: \"kubernetes.io/projected/c1014bff-7e53-4d3e-84e3-ab612a083726-kube-api-access-prkk8\") pod \"cilium-operator-574c4bb98d-4zzzp\" (UID: \"c1014bff-7e53-4d3e-84e3-ab612a083726\") " pod="kube-system/cilium-operator-574c4bb98d-4zzzp" Feb 9 19:33:23.394942 kubelet[1500]: I0209 19:33:23.394802 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-bpf-maps\") pod \"cilium-zdrwp\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " pod="kube-system/cilium-zdrwp" Feb 9 19:33:23.394942 kubelet[1500]: I0209 19:33:23.394867 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-config-path\") pod \"cilium-zdrwp\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " pod="kube-system/cilium-zdrwp" Feb 9 19:33:23.394942 kubelet[1500]: I0209 19:33:23.394901 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-host-proc-sys-net\") pod \"cilium-zdrwp\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " pod="kube-system/cilium-zdrwp" Feb 9 19:33:23.395851 kubelet[1500]: I0209 19:33:23.395816 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-lib-modules\") pod \"cilium-zdrwp\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " pod="kube-system/cilium-zdrwp" Feb 9 19:33:23.395981 kubelet[1500]: I0209 19:33:23.395880 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6098ecda-7681-4e7e-9ae5-5526d4d18877-clustermesh-secrets\") pod \"cilium-zdrwp\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " pod="kube-system/cilium-zdrwp" Feb 9 19:33:23.395981 kubelet[1500]: I0209 19:33:23.395916 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-ipsec-secrets\") pod \"cilium-zdrwp\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " pod="kube-system/cilium-zdrwp" Feb 9 19:33:23.395981 kubelet[1500]: I0209 19:33:23.395948 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-host-proc-sys-kernel\") pod \"cilium-zdrwp\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " pod="kube-system/cilium-zdrwp" Feb 9 19:33:23.396155 kubelet[1500]: I0209 19:33:23.396004 
1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6098ecda-7681-4e7e-9ae5-5526d4d18877-hubble-tls\") pod \"cilium-zdrwp\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " pod="kube-system/cilium-zdrwp" Feb 9 19:33:23.396155 kubelet[1500]: I0209 19:33:23.396041 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8pv7\" (UniqueName: \"kubernetes.io/projected/6098ecda-7681-4e7e-9ae5-5526d4d18877-kube-api-access-f8pv7\") pod \"cilium-zdrwp\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " pod="kube-system/cilium-zdrwp" Feb 9 19:33:23.396155 kubelet[1500]: I0209 19:33:23.396076 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-hostproc\") pod \"cilium-zdrwp\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " pod="kube-system/cilium-zdrwp" Feb 9 19:33:23.396155 kubelet[1500]: I0209 19:33:23.396113 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-cgroup\") pod \"cilium-zdrwp\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " pod="kube-system/cilium-zdrwp" Feb 9 19:33:23.396155 kubelet[1500]: I0209 19:33:23.396150 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-cni-path\") pod \"cilium-zdrwp\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " pod="kube-system/cilium-zdrwp" Feb 9 19:33:23.396448 kubelet[1500]: I0209 19:33:23.396216 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-etc-cni-netd\") pod \"cilium-zdrwp\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " pod="kube-system/cilium-zdrwp" Feb 9 19:33:23.396448 kubelet[1500]: I0209 19:33:23.396253 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-run\") pod \"cilium-zdrwp\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " pod="kube-system/cilium-zdrwp" Feb 9 19:33:23.396448 kubelet[1500]: I0209 19:33:23.396290 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-xtables-lock\") pod \"cilium-zdrwp\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " pod="kube-system/cilium-zdrwp" Feb 9 19:33:23.486019 kubelet[1500]: E0209 19:33:23.485949 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:23.573057 kubelet[1500]: E0209 19:33:23.573014 1500 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:33:24.142496 env[1148]: time="2024-02-09T19:33:24.142434662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zdrwp,Uid:6098ecda-7681-4e7e-9ae5-5526d4d18877,Namespace:kube-system,Attempt:0,}" Feb 9 
19:33:24.160758 env[1148]: time="2024-02-09T19:33:24.160230890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:33:24.160758 env[1148]: time="2024-02-09T19:33:24.160283985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:33:24.160758 env[1148]: time="2024-02-09T19:33:24.160311647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:33:24.160758 env[1148]: time="2024-02-09T19:33:24.160483295Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/66f10a000f47f4b370f337925eb282fe604c7038dbb5b8cc9d73b68b1ce55e74 pid=3018 runtime=io.containerd.runc.v2 Feb 9 19:33:24.178831 systemd[1]: Started cri-containerd-66f10a000f47f4b370f337925eb282fe604c7038dbb5b8cc9d73b68b1ce55e74.scope. Feb 9 19:33:24.214728 env[1148]: time="2024-02-09T19:33:24.214658303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zdrwp,Uid:6098ecda-7681-4e7e-9ae5-5526d4d18877,Namespace:kube-system,Attempt:0,} returns sandbox id \"66f10a000f47f4b370f337925eb282fe604c7038dbb5b8cc9d73b68b1ce55e74\"" Feb 9 19:33:24.218923 env[1148]: time="2024-02-09T19:33:24.218859434Z" level=info msg="CreateContainer within sandbox \"66f10a000f47f4b370f337925eb282fe604c7038dbb5b8cc9d73b68b1ce55e74\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:33:24.233954 env[1148]: time="2024-02-09T19:33:24.233895385Z" level=info msg="CreateContainer within sandbox \"66f10a000f47f4b370f337925eb282fe604c7038dbb5b8cc9d73b68b1ce55e74\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226\"" Feb 9 19:33:24.234673 env[1148]: time="2024-02-09T19:33:24.234626184Z" level=info msg="StartContainer for \"53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226\"" Feb 9 19:33:24.256809 systemd[1]: Started cri-containerd-53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226.scope. Feb 9 19:33:24.272400 systemd[1]: cri-containerd-53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226.scope: Deactivated successfully. 
Feb 9 19:33:24.292130 env[1148]: time="2024-02-09T19:33:24.292057264Z" level=info msg="shim disconnected" id=53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226 Feb 9 19:33:24.292130 env[1148]: time="2024-02-09T19:33:24.292132003Z" level=warning msg="cleaning up after shim disconnected" id=53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226 namespace=k8s.io Feb 9 19:33:24.292513 env[1148]: time="2024-02-09T19:33:24.292146515Z" level=info msg="cleaning up dead shim" Feb 9 19:33:24.304928 env[1148]: time="2024-02-09T19:33:24.304863516Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:33:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3079 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:33:24Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:33:24.305363 env[1148]: time="2024-02-09T19:33:24.305211934Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Feb 9 19:33:24.306306 env[1148]: time="2024-02-09T19:33:24.306252415Z" level=error msg="Failed to pipe stdout of container \"53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226\"" error="reading from a closed fifo" Feb 9 19:33:24.306516 env[1148]: time="2024-02-09T19:33:24.306474111Z" level=error msg="Failed to pipe stderr of container \"53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226\"" error="reading from a closed fifo" Feb 9 19:33:24.308518 env[1148]: time="2024-02-09T19:33:24.308448065Z" level=error msg="StartContainer for \"53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:33:24.308873 kubelet[1500]: E0209 19:33:24.308831 1500 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226" Feb 9 19:33:24.309088 kubelet[1500]: E0209 19:33:24.309065 1500 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:33:24.309088 kubelet[1500]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:33:24.309088 kubelet[1500]: rm /hostbin/cilium-mount Feb 9 19:33:24.309641 kubelet[1500]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-f8pv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-zdrwp_kube-system(6098ecda-7681-4e7e-9ae5-5526d4d18877): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:33:24.309641 kubelet[1500]: E0209 19:33:24.309143 1500 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zdrwp" podUID=6098ecda-7681-4e7e-9ae5-5526d4d18877 Feb 9 19:33:24.328731 env[1148]: time="2024-02-09T19:33:24.328674938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-4zzzp,Uid:c1014bff-7e53-4d3e-84e3-ab612a083726,Namespace:kube-system,Attempt:0,}" Feb 9 19:33:24.347409 env[1148]: time="2024-02-09T19:33:24.347322717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:33:24.347409 env[1148]: time="2024-02-09T19:33:24.347378717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:33:24.347727 env[1148]: time="2024-02-09T19:33:24.347653161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:33:24.348128 env[1148]: time="2024-02-09T19:33:24.348070827Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/afac29fdb989ea0f062a3a5c9d69fa30d4226e9a1da9ffc8a2920f51a4647f11 pid=3101 runtime=io.containerd.runc.v2 Feb 9 19:33:24.365793 systemd[1]: Started cri-containerd-afac29fdb989ea0f062a3a5c9d69fa30d4226e9a1da9ffc8a2920f51a4647f11.scope. 
Feb 9 19:33:24.429706 env[1148]: time="2024-02-09T19:33:24.429578166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-4zzzp,Uid:c1014bff-7e53-4d3e-84e3-ab612a083726,Namespace:kube-system,Attempt:0,} returns sandbox id \"afac29fdb989ea0f062a3a5c9d69fa30d4226e9a1da9ffc8a2920f51a4647f11\"" Feb 9 19:33:24.436366 kubelet[1500]: E0209 19:33:24.436306 1500 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url Feb 9 19:33:24.436765 env[1148]: time="2024-02-09T19:33:24.436733936Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:33:24.486673 kubelet[1500]: E0209 19:33:24.486600 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:24.796781 env[1148]: time="2024-02-09T19:33:24.796720915Z" level=info msg="StopPodSandbox for \"66f10a000f47f4b370f337925eb282fe604c7038dbb5b8cc9d73b68b1ce55e74\"" Feb 9 19:33:24.803739 env[1148]: time="2024-02-09T19:33:24.796794569Z" level=info msg="Container to stop \"53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:33:24.799738 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66f10a000f47f4b370f337925eb282fe604c7038dbb5b8cc9d73b68b1ce55e74-shm.mount: Deactivated successfully. Feb 9 19:33:24.808316 systemd[1]: cri-containerd-66f10a000f47f4b370f337925eb282fe604c7038dbb5b8cc9d73b68b1ce55e74.scope: Deactivated successfully. Feb 9 19:33:24.838639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66f10a000f47f4b370f337925eb282fe604c7038dbb5b8cc9d73b68b1ce55e74-rootfs.mount: Deactivated successfully. 
Feb 9 19:33:24.843506 env[1148]: time="2024-02-09T19:33:24.843450314Z" level=info msg="shim disconnected" id=66f10a000f47f4b370f337925eb282fe604c7038dbb5b8cc9d73b68b1ce55e74 Feb 9 19:33:24.844145 env[1148]: time="2024-02-09T19:33:24.844109431Z" level=warning msg="cleaning up after shim disconnected" id=66f10a000f47f4b370f337925eb282fe604c7038dbb5b8cc9d73b68b1ce55e74 namespace=k8s.io Feb 9 19:33:24.844376 env[1148]: time="2024-02-09T19:33:24.844332448Z" level=info msg="cleaning up dead shim" Feb 9 19:33:24.855465 env[1148]: time="2024-02-09T19:33:24.855419508Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:33:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3154 runtime=io.containerd.runc.v2\n" Feb 9 19:33:24.855843 env[1148]: time="2024-02-09T19:33:24.855804764Z" level=info msg="TearDown network for sandbox \"66f10a000f47f4b370f337925eb282fe604c7038dbb5b8cc9d73b68b1ce55e74\" successfully" Feb 9 19:33:24.855950 env[1148]: time="2024-02-09T19:33:24.855842250Z" level=info msg="StopPodSandbox for \"66f10a000f47f4b370f337925eb282fe604c7038dbb5b8cc9d73b68b1ce55e74\" returns successfully" Feb 9 19:33:25.006654 kubelet[1500]: I0209 19:33:25.006579 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-ipsec-secrets\") pod \"6098ecda-7681-4e7e-9ae5-5526d4d18877\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " Feb 9 19:33:25.006654 kubelet[1500]: I0209 19:33:25.006640 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-host-proc-sys-kernel\") pod \"6098ecda-7681-4e7e-9ae5-5526d4d18877\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " Feb 9 19:33:25.006953 kubelet[1500]: I0209 19:33:25.006675 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-cgroup\") pod \"6098ecda-7681-4e7e-9ae5-5526d4d18877\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " Feb 9 19:33:25.006953 kubelet[1500]: I0209 19:33:25.006703 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-cni-path\") pod \"6098ecda-7681-4e7e-9ae5-5526d4d18877\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " Feb 9 19:33:25.006953 kubelet[1500]: I0209 19:33:25.006729 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-run\") pod \"6098ecda-7681-4e7e-9ae5-5526d4d18877\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " Feb 9 19:33:25.006953 kubelet[1500]: I0209 19:33:25.006761 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6098ecda-7681-4e7e-9ae5-5526d4d18877-clustermesh-secrets\") pod \"6098ecda-7681-4e7e-9ae5-5526d4d18877\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " Feb 9 19:33:25.006953 kubelet[1500]: I0209 19:33:25.006791 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8pv7\" (UniqueName: \"kubernetes.io/projected/6098ecda-7681-4e7e-9ae5-5526d4d18877-kube-api-access-f8pv7\") pod \"6098ecda-7681-4e7e-9ae5-5526d4d18877\" (UID: 
\"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " Feb 9 19:33:25.006953 kubelet[1500]: I0209 19:33:25.006817 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-etc-cni-netd\") pod \"6098ecda-7681-4e7e-9ae5-5526d4d18877\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " Feb 9 19:33:25.006953 kubelet[1500]: I0209 19:33:25.006842 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-hostproc\") pod \"6098ecda-7681-4e7e-9ae5-5526d4d18877\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " Feb 9 19:33:25.006953 kubelet[1500]: I0209 19:33:25.006871 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-xtables-lock\") pod \"6098ecda-7681-4e7e-9ae5-5526d4d18877\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " Feb 9 19:33:25.006953 kubelet[1500]: I0209 19:33:25.006902 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6098ecda-7681-4e7e-9ae5-5526d4d18877-hubble-tls\") pod \"6098ecda-7681-4e7e-9ae5-5526d4d18877\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " Feb 9 19:33:25.006953 kubelet[1500]: I0209 19:33:25.006937 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-bpf-maps\") pod \"6098ecda-7681-4e7e-9ae5-5526d4d18877\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " Feb 9 19:33:25.007496 kubelet[1500]: I0209 19:33:25.006975 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-config-path\") pod \"6098ecda-7681-4e7e-9ae5-5526d4d18877\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " Feb 9 19:33:25.007496 kubelet[1500]: I0209 19:33:25.007014 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-host-proc-sys-net\") pod \"6098ecda-7681-4e7e-9ae5-5526d4d18877\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " Feb 9 19:33:25.007496 kubelet[1500]: I0209 19:33:25.007046 1500 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-lib-modules\") pod \"6098ecda-7681-4e7e-9ae5-5526d4d18877\" (UID: \"6098ecda-7681-4e7e-9ae5-5526d4d18877\") " Feb 9 19:33:25.007496 kubelet[1500]: I0209 19:33:25.007116 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6098ecda-7681-4e7e-9ae5-5526d4d18877" (UID: "6098ecda-7681-4e7e-9ae5-5526d4d18877"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:25.008298 kubelet[1500]: I0209 19:33:25.007788 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6098ecda-7681-4e7e-9ae5-5526d4d18877" (UID: "6098ecda-7681-4e7e-9ae5-5526d4d18877"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:25.008298 kubelet[1500]: I0209 19:33:25.007853 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6098ecda-7681-4e7e-9ae5-5526d4d18877" (UID: "6098ecda-7681-4e7e-9ae5-5526d4d18877"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:25.008298 kubelet[1500]: I0209 19:33:25.007882 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6098ecda-7681-4e7e-9ae5-5526d4d18877" (UID: "6098ecda-7681-4e7e-9ae5-5526d4d18877"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:25.008298 kubelet[1500]: I0209 19:33:25.007909 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-cni-path" (OuterVolumeSpecName: "cni-path") pod "6098ecda-7681-4e7e-9ae5-5526d4d18877" (UID: "6098ecda-7681-4e7e-9ae5-5526d4d18877"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:25.008298 kubelet[1500]: I0209 19:33:25.007933 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6098ecda-7681-4e7e-9ae5-5526d4d18877" (UID: "6098ecda-7681-4e7e-9ae5-5526d4d18877"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:25.008854 kubelet[1500]: I0209 19:33:25.008816 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-hostproc" (OuterVolumeSpecName: "hostproc") pod "6098ecda-7681-4e7e-9ae5-5526d4d18877" (UID: "6098ecda-7681-4e7e-9ae5-5526d4d18877"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:25.009024 kubelet[1500]: I0209 19:33:25.009002 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6098ecda-7681-4e7e-9ae5-5526d4d18877" (UID: "6098ecda-7681-4e7e-9ae5-5526d4d18877"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:25.009576 kubelet[1500]: I0209 19:33:25.009544 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6098ecda-7681-4e7e-9ae5-5526d4d18877" (UID: "6098ecda-7681-4e7e-9ae5-5526d4d18877"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:25.010022 kubelet[1500]: W0209 19:33:25.009967 1500 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/6098ecda-7681-4e7e-9ae5-5526d4d18877/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:33:25.011307 kubelet[1500]: I0209 19:33:25.011272 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6098ecda-7681-4e7e-9ae5-5526d4d18877" (UID: "6098ecda-7681-4e7e-9ae5-5526d4d18877"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:33:25.018511 kubelet[1500]: I0209 19:33:25.012250 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6098ecda-7681-4e7e-9ae5-5526d4d18877" (UID: "6098ecda-7681-4e7e-9ae5-5526d4d18877"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:33:25.014465 systemd[1]: var-lib-kubelet-pods-6098ecda\x2d7681\x2d4e7e\x2d9ae5\x2d5526d4d18877-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:33:25.027150 kubelet[1500]: I0209 19:33:25.022707 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6098ecda-7681-4e7e-9ae5-5526d4d18877-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6098ecda-7681-4e7e-9ae5-5526d4d18877" (UID: "6098ecda-7681-4e7e-9ae5-5526d4d18877"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:33:25.027150 kubelet[1500]: I0209 19:33:25.025288 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6098ecda-7681-4e7e-9ae5-5526d4d18877" (UID: "6098ecda-7681-4e7e-9ae5-5526d4d18877"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:33:25.023604 systemd[1]: var-lib-kubelet-pods-6098ecda\x2d7681\x2d4e7e\x2d9ae5\x2d5526d4d18877-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:33:25.029775 kubelet[1500]: I0209 19:33:25.029739 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6098ecda-7681-4e7e-9ae5-5526d4d18877-kube-api-access-f8pv7" (OuterVolumeSpecName: "kube-api-access-f8pv7") pod "6098ecda-7681-4e7e-9ae5-5526d4d18877" (UID: "6098ecda-7681-4e7e-9ae5-5526d4d18877"). InnerVolumeSpecName "kube-api-access-f8pv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:33:25.030238 kubelet[1500]: I0209 19:33:25.030211 1500 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6098ecda-7681-4e7e-9ae5-5526d4d18877-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6098ecda-7681-4e7e-9ae5-5526d4d18877" (UID: "6098ecda-7681-4e7e-9ae5-5526d4d18877"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:33:25.107647 kubelet[1500]: I0209 19:33:25.107511 1500 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-cgroup\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:25.107647 kubelet[1500]: I0209 19:33:25.107550 1500 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-ipsec-secrets\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:25.107647 kubelet[1500]: I0209 19:33:25.107568 1500 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-host-proc-sys-kernel\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:25.107647 kubelet[1500]: I0209 19:33:25.107583 1500 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6098ecda-7681-4e7e-9ae5-5526d4d18877-clustermesh-secrets\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:25.107647 kubelet[1500]: I0209 19:33:25.107603 1500 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f8pv7\" (UniqueName: \"kubernetes.io/projected/6098ecda-7681-4e7e-9ae5-5526d4d18877-kube-api-access-f8pv7\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:25.107647 kubelet[1500]: I0209 19:33:25.107619 1500 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-cni-path\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:25.109266 kubelet[1500]: I0209 19:33:25.109242 1500 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-run\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:25.109434 kubelet[1500]: I0209 19:33:25.109417 1500 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-etc-cni-netd\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:25.109560 kubelet[1500]: I0209 19:33:25.109547 1500 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6098ecda-7681-4e7e-9ae5-5526d4d18877-hubble-tls\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:25.109710 kubelet[1500]: I0209 19:33:25.109697 1500 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-hostproc\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:25.109859 kubelet[1500]: I0209 19:33:25.109844 1500 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-xtables-lock\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:25.110021 kubelet[1500]: I0209 19:33:25.110005 1500 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-host-proc-sys-net\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:25.110204 kubelet[1500]: I0209 19:33:25.110163 1500 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-lib-modules\") on node 
\"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:25.110368 kubelet[1500]: I0209 19:33:25.110327 1500 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6098ecda-7681-4e7e-9ae5-5526d4d18877-bpf-maps\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:25.110368 kubelet[1500]: I0209 19:33:25.110356 1500 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6098ecda-7681-4e7e-9ae5-5526d4d18877-cilium-config-path\") on node \"10.128.0.33\" DevicePath \"\"" Feb 9 19:33:25.413526 systemd[1]: var-lib-kubelet-pods-6098ecda\x2d7681\x2d4e7e\x2d9ae5\x2d5526d4d18877-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df8pv7.mount: Deactivated successfully. Feb 9 19:33:25.414049 systemd[1]: var-lib-kubelet-pods-6098ecda\x2d7681\x2d4e7e\x2d9ae5\x2d5526d4d18877-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:33:25.480496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622755216.mount: Deactivated successfully. Feb 9 19:33:25.487781 kubelet[1500]: E0209 19:33:25.487726 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:25.799637 kubelet[1500]: I0209 19:33:25.799597 1500 scope.go:115] "RemoveContainer" containerID="53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226" Feb 9 19:33:25.805620 systemd[1]: Removed slice kubepods-burstable-pod6098ecda_7681_4e7e_9ae5_5526d4d18877.slice. Feb 9 19:33:25.807384 env[1148]: time="2024-02-09T19:33:25.807337252Z" level=info msg="RemoveContainer for \"53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226\"" Feb 9 19:33:25.811814 env[1148]: time="2024-02-09T19:33:25.811773563Z" level=info msg="RemoveContainer for \"53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226\" returns successfully" Feb 9 19:33:25.890890 kubelet[1500]: I0209 19:33:25.890853 1500 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:33:25.891219 kubelet[1500]: E0209 19:33:25.891171 1500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6098ecda-7681-4e7e-9ae5-5526d4d18877" containerName="mount-cgroup" Feb 9 19:33:25.891407 kubelet[1500]: I0209 19:33:25.891393 1500 memory_manager.go:346] "RemoveStaleState removing state" podUID="6098ecda-7681-4e7e-9ae5-5526d4d18877" containerName="mount-cgroup" Feb 9 19:33:25.900873 systemd[1]: Created slice kubepods-burstable-pod404f19c1_7223_4803_9923_2b48b7c6224e.slice. 
Feb 9 19:33:26.016523 kubelet[1500]: I0209 19:33:26.016484 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/404f19c1-7223-4803-9923-2b48b7c6224e-lib-modules\") pod \"cilium-88jlc\" (UID: \"404f19c1-7223-4803-9923-2b48b7c6224e\") " pod="kube-system/cilium-88jlc" Feb 9 19:33:26.016835 kubelet[1500]: I0209 19:33:26.016818 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/404f19c1-7223-4803-9923-2b48b7c6224e-bpf-maps\") pod \"cilium-88jlc\" (UID: \"404f19c1-7223-4803-9923-2b48b7c6224e\") " pod="kube-system/cilium-88jlc" Feb 9 19:33:26.016997 kubelet[1500]: I0209 19:33:26.016984 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/404f19c1-7223-4803-9923-2b48b7c6224e-cni-path\") pod \"cilium-88jlc\" (UID: \"404f19c1-7223-4803-9923-2b48b7c6224e\") " pod="kube-system/cilium-88jlc" Feb 9 19:33:26.017213 kubelet[1500]: I0209 19:33:26.017163 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/404f19c1-7223-4803-9923-2b48b7c6224e-etc-cni-netd\") pod \"cilium-88jlc\" (UID: \"404f19c1-7223-4803-9923-2b48b7c6224e\") " pod="kube-system/cilium-88jlc" Feb 9 19:33:26.017408 kubelet[1500]: I0209 19:33:26.017395 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2ndd\" (UniqueName: \"kubernetes.io/projected/404f19c1-7223-4803-9923-2b48b7c6224e-kube-api-access-h2ndd\") pod \"cilium-88jlc\" (UID: \"404f19c1-7223-4803-9923-2b48b7c6224e\") " pod="kube-system/cilium-88jlc" Feb 9 19:33:26.017544 kubelet[1500]: I0209 19:33:26.017529 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/404f19c1-7223-4803-9923-2b48b7c6224e-xtables-lock\") pod \"cilium-88jlc\" (UID: \"404f19c1-7223-4803-9923-2b48b7c6224e\") " pod="kube-system/cilium-88jlc" Feb 9 19:33:26.017687 kubelet[1500]: I0209 19:33:26.017672 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/404f19c1-7223-4803-9923-2b48b7c6224e-host-proc-sys-kernel\") pod \"cilium-88jlc\" (UID: \"404f19c1-7223-4803-9923-2b48b7c6224e\") " pod="kube-system/cilium-88jlc" Feb 9 19:33:26.017819 kubelet[1500]: I0209 19:33:26.017807 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/404f19c1-7223-4803-9923-2b48b7c6224e-cilium-run\") pod \"cilium-88jlc\" (UID: \"404f19c1-7223-4803-9923-2b48b7c6224e\") " pod="kube-system/cilium-88jlc" Feb 9 19:33:26.018146 kubelet[1500]: I0209 19:33:26.018129 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/404f19c1-7223-4803-9923-2b48b7c6224e-clustermesh-secrets\") pod \"cilium-88jlc\" (UID: \"404f19c1-7223-4803-9923-2b48b7c6224e\") " pod="kube-system/cilium-88jlc" Feb 9 19:33:26.018381 kubelet[1500]: I0209 19:33:26.018366 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/404f19c1-7223-4803-9923-2b48b7c6224e-hostproc\") pod \"cilium-88jlc\" (UID: \"404f19c1-7223-4803-9923-2b48b7c6224e\") " pod="kube-system/cilium-88jlc" Feb 9 19:33:26.018575 kubelet[1500]: I0209 19:33:26.018560 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/404f19c1-7223-4803-9923-2b48b7c6224e-cilium-cgroup\") pod \"cilium-88jlc\" (UID: \"404f19c1-7223-4803-9923-2b48b7c6224e\") " pod="kube-system/cilium-88jlc" Feb 9 19:33:26.018773 kubelet[1500]: I0209 19:33:26.018760 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/404f19c1-7223-4803-9923-2b48b7c6224e-hubble-tls\") pod \"cilium-88jlc\" (UID: \"404f19c1-7223-4803-9923-2b48b7c6224e\") " pod="kube-system/cilium-88jlc" Feb 9 19:33:26.018960 kubelet[1500]: I0209 19:33:26.018946 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/404f19c1-7223-4803-9923-2b48b7c6224e-cilium-config-path\") pod \"cilium-88jlc\" (UID: \"404f19c1-7223-4803-9923-2b48b7c6224e\") " pod="kube-system/cilium-88jlc" Feb 9 19:33:26.019145 kubelet[1500]: I0209 19:33:26.019132 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/404f19c1-7223-4803-9923-2b48b7c6224e-cilium-ipsec-secrets\") pod \"cilium-88jlc\" (UID: \"404f19c1-7223-4803-9923-2b48b7c6224e\") " pod="kube-system/cilium-88jlc" Feb 9 19:33:26.019362 kubelet[1500]: I0209 19:33:26.019347 1500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/404f19c1-7223-4803-9923-2b48b7c6224e-host-proc-sys-net\") pod \"cilium-88jlc\" (UID: \"404f19c1-7223-4803-9923-2b48b7c6224e\") " pod="kube-system/cilium-88jlc" Feb 9 19:33:26.212544 env[1148]: time="2024-02-09T19:33:26.210606126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-88jlc,Uid:404f19c1-7223-4803-9923-2b48b7c6224e,Namespace:kube-system,Attempt:0,}" Feb 9 19:33:26.241126 env[1148]: time="2024-02-09T19:33:26.241012892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:33:26.241126 env[1148]: time="2024-02-09T19:33:26.241084812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:33:26.241707 env[1148]: time="2024-02-09T19:33:26.241653586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:33:26.242427 env[1148]: time="2024-02-09T19:33:26.242373202Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c130f5252db43351bf076803a2be61ba130083b47c13f1a18e7d234d22d137e4 pid=3184 runtime=io.containerd.runc.v2 Feb 9 19:33:26.288239 systemd[1]: Started cri-containerd-c130f5252db43351bf076803a2be61ba130083b47c13f1a18e7d234d22d137e4.scope. 
Feb 9 19:33:26.334837 env[1148]: time="2024-02-09T19:33:26.334782720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-88jlc,Uid:404f19c1-7223-4803-9923-2b48b7c6224e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c130f5252db43351bf076803a2be61ba130083b47c13f1a18e7d234d22d137e4\"" Feb 9 19:33:26.338747 env[1148]: time="2024-02-09T19:33:26.338705540Z" level=info msg="CreateContainer within sandbox \"c130f5252db43351bf076803a2be61ba130083b47c13f1a18e7d234d22d137e4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:33:26.367906 env[1148]: time="2024-02-09T19:33:26.367850310Z" level=info msg="CreateContainer within sandbox \"c130f5252db43351bf076803a2be61ba130083b47c13f1a18e7d234d22d137e4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f214201db2d7beaa01c1b8cf64f3779aeefc4f8c59a699a2ea0951dfc9d1043c\"" Feb 9 19:33:26.368805 env[1148]: time="2024-02-09T19:33:26.368767987Z" level=info msg="StartContainer for \"f214201db2d7beaa01c1b8cf64f3779aeefc4f8c59a699a2ea0951dfc9d1043c\"" Feb 9 19:33:26.396654 systemd[1]: Started cri-containerd-f214201db2d7beaa01c1b8cf64f3779aeefc4f8c59a699a2ea0951dfc9d1043c.scope. Feb 9 19:33:26.462118 env[1148]: time="2024-02-09T19:33:26.462062026Z" level=info msg="StartContainer for \"f214201db2d7beaa01c1b8cf64f3779aeefc4f8c59a699a2ea0951dfc9d1043c\" returns successfully" Feb 9 19:33:26.466802 systemd[1]: cri-containerd-f214201db2d7beaa01c1b8cf64f3779aeefc4f8c59a699a2ea0951dfc9d1043c.scope: Deactivated successfully. Feb 9 19:33:26.488596 kubelet[1500]: E0209 19:33:26.488558 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:26.502785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f214201db2d7beaa01c1b8cf64f3779aeefc4f8c59a699a2ea0951dfc9d1043c-rootfs.mount: Deactivated successfully. 
Feb 9 19:33:26.635849 kubelet[1500]: I0209 19:33:26.635339 1500 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=6098ecda-7681-4e7e-9ae5-5526d4d18877 path="/var/lib/kubelet/pods/6098ecda-7681-4e7e-9ae5-5526d4d18877/volumes" Feb 9 19:33:26.663345 env[1148]: time="2024-02-09T19:33:26.663287337Z" level=info msg="shim disconnected" id=f214201db2d7beaa01c1b8cf64f3779aeefc4f8c59a699a2ea0951dfc9d1043c Feb 9 19:33:26.663686 env[1148]: time="2024-02-09T19:33:26.663657534Z" level=warning msg="cleaning up after shim disconnected" id=f214201db2d7beaa01c1b8cf64f3779aeefc4f8c59a699a2ea0951dfc9d1043c namespace=k8s.io Feb 9 19:33:26.663826 env[1148]: time="2024-02-09T19:33:26.663803274Z" level=info msg="cleaning up dead shim" Feb 9 19:33:26.677167 env[1148]: time="2024-02-09T19:33:26.677107569Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:33:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3269 runtime=io.containerd.runc.v2\n" Feb 9 19:33:26.710133 env[1148]: time="2024-02-09T19:33:26.710071355Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:33:26.712531 env[1148]: time="2024-02-09T19:33:26.712483638Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:33:26.714867 env[1148]: time="2024-02-09T19:33:26.714822486Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:33:26.715666 env[1148]: time="2024-02-09T19:33:26.715623095Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:33:26.718862 env[1148]: time="2024-02-09T19:33:26.718078759Z" level=info msg="CreateContainer within sandbox \"afac29fdb989ea0f062a3a5c9d69fa30d4226e9a1da9ffc8a2920f51a4647f11\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:33:26.737758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1810107317.mount: Deactivated successfully. Feb 9 19:33:26.745609 env[1148]: time="2024-02-09T19:33:26.745551688Z" level=info msg="CreateContainer within sandbox \"afac29fdb989ea0f062a3a5c9d69fa30d4226e9a1da9ffc8a2920f51a4647f11\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bf9867272a0c4ae87fab6020b0846b4231fd49c71dd79b3b0ee787b85ab6df2b\"" Feb 9 19:33:26.746454 env[1148]: time="2024-02-09T19:33:26.746417960Z" level=info msg="StartContainer for \"bf9867272a0c4ae87fab6020b0846b4231fd49c71dd79b3b0ee787b85ab6df2b\"" Feb 9 19:33:26.769353 systemd[1]: Started cri-containerd-bf9867272a0c4ae87fab6020b0846b4231fd49c71dd79b3b0ee787b85ab6df2b.scope. 
Feb 9 19:33:26.810128 env[1148]: time="2024-02-09T19:33:26.810036586Z" level=info msg="CreateContainer within sandbox \"c130f5252db43351bf076803a2be61ba130083b47c13f1a18e7d234d22d137e4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:33:26.822078 env[1148]: time="2024-02-09T19:33:26.822026077Z" level=info msg="StartContainer for \"bf9867272a0c4ae87fab6020b0846b4231fd49c71dd79b3b0ee787b85ab6df2b\" returns successfully" Feb 9 19:33:26.835954 env[1148]: time="2024-02-09T19:33:26.835893787Z" level=info msg="CreateContainer within sandbox \"c130f5252db43351bf076803a2be61ba130083b47c13f1a18e7d234d22d137e4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3d720dc8545338015c041f5451ec72e2b3ae80cba9e2764562ef7ef2abe012e9\"" Feb 9 19:33:26.837028 env[1148]: time="2024-02-09T19:33:26.836989616Z" level=info msg="StartContainer for \"3d720dc8545338015c041f5451ec72e2b3ae80cba9e2764562ef7ef2abe012e9\"" Feb 9 19:33:26.866811 systemd[1]: Started cri-containerd-3d720dc8545338015c041f5451ec72e2b3ae80cba9e2764562ef7ef2abe012e9.scope. Feb 9 19:33:26.922753 env[1148]: time="2024-02-09T19:33:26.922232994Z" level=info msg="StartContainer for \"3d720dc8545338015c041f5451ec72e2b3ae80cba9e2764562ef7ef2abe012e9\" returns successfully" Feb 9 19:33:26.936166 systemd[1]: cri-containerd-3d720dc8545338015c041f5451ec72e2b3ae80cba9e2764562ef7ef2abe012e9.scope: Deactivated successfully. Feb 9 19:33:26.984974 env[1148]: time="2024-02-09T19:33:26.984813880Z" level=info msg="shim disconnected" id=3d720dc8545338015c041f5451ec72e2b3ae80cba9e2764562ef7ef2abe012e9 Feb 9 19:33:26.985583 env[1148]: time="2024-02-09T19:33:26.985529460Z" level=warning msg="cleaning up after shim disconnected" id=3d720dc8545338015c041f5451ec72e2b3ae80cba9e2764562ef7ef2abe012e9 namespace=k8s.io Feb 9 19:33:26.985804 env[1148]: time="2024-02-09T19:33:26.985775605Z" level=info msg="cleaning up dead shim" Feb 9 19:33:27.009038 env[1148]: time="2024-02-09T19:33:27.008977248Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:33:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3369 runtime=io.containerd.runc.v2\n" Feb 9 19:33:27.398929 kubelet[1500]: W0209 19:33:27.398758 1500 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6098ecda_7681_4e7e_9ae5_5526d4d18877.slice/cri-containerd-53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226.scope WatchSource:0}: container "53e11e787b20d2c926763589530c019af064796a0a7a5de8a80dc21f941d9226" in namespace "k8s.io": not found Feb 9 19:33:27.489054 kubelet[1500]: E0209 19:33:27.489002 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:27.819723 env[1148]: time="2024-02-09T19:33:27.819667914Z" level=info msg="CreateContainer within sandbox \"c130f5252db43351bf076803a2be61ba130083b47c13f1a18e7d234d22d137e4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:33:27.842418 env[1148]: time="2024-02-09T19:33:27.842367570Z" level=info msg="CreateContainer within sandbox \"c130f5252db43351bf076803a2be61ba130083b47c13f1a18e7d234d22d137e4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6c6e9d6f6b2e3a360cef7164493743e445edfe343186168d4c0714aa946fc6de\"" Feb 9 19:33:27.843543 env[1148]: time="2024-02-09T19:33:27.843482986Z" level=info msg="StartContainer for 
\"6c6e9d6f6b2e3a360cef7164493743e445edfe343186168d4c0714aa946fc6de\"" Feb 9 19:33:27.885525 systemd[1]: Started cri-containerd-6c6e9d6f6b2e3a360cef7164493743e445edfe343186168d4c0714aa946fc6de.scope. Feb 9 19:33:27.924076 env[1148]: time="2024-02-09T19:33:27.924026054Z" level=info msg="StartContainer for \"6c6e9d6f6b2e3a360cef7164493743e445edfe343186168d4c0714aa946fc6de\" returns successfully" Feb 9 19:33:27.925777 systemd[1]: cri-containerd-6c6e9d6f6b2e3a360cef7164493743e445edfe343186168d4c0714aa946fc6de.scope: Deactivated successfully. Feb 9 19:33:27.962855 env[1148]: time="2024-02-09T19:33:27.962770551Z" level=info msg="shim disconnected" id=6c6e9d6f6b2e3a360cef7164493743e445edfe343186168d4c0714aa946fc6de Feb 9 19:33:27.962855 env[1148]: time="2024-02-09T19:33:27.962850565Z" level=warning msg="cleaning up after shim disconnected" id=6c6e9d6f6b2e3a360cef7164493743e445edfe343186168d4c0714aa946fc6de namespace=k8s.io Feb 9 19:33:27.963230 env[1148]: time="2024-02-09T19:33:27.962865530Z" level=info msg="cleaning up dead shim" Feb 9 19:33:27.973661 env[1148]: time="2024-02-09T19:33:27.973590450Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:33:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3427 runtime=io.containerd.runc.v2\n" Feb 9 19:33:28.413799 systemd[1]: run-containerd-runc-k8s.io-6c6e9d6f6b2e3a360cef7164493743e445edfe343186168d4c0714aa946fc6de-runc.1gFMeq.mount: Deactivated successfully. Feb 9 19:33:28.413945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c6e9d6f6b2e3a360cef7164493743e445edfe343186168d4c0714aa946fc6de-rootfs.mount: Deactivated successfully. Feb 9 19:33:28.490042 kubelet[1500]: E0209 19:33:28.489967 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:28.574681 kubelet[1500]: E0209 19:33:28.574632 1500 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:33:28.831307 env[1148]: time="2024-02-09T19:33:28.831247719Z" level=info msg="CreateContainer within sandbox \"c130f5252db43351bf076803a2be61ba130083b47c13f1a18e7d234d22d137e4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:33:28.849280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2733060460.mount: Deactivated successfully. 
Feb 9 19:33:28.854292 kubelet[1500]: I0209 19:33:28.854241 1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-4zzzp" podStartSLOduration=3.567417592 podCreationTimestamp="2024-02-09 19:33:23 +0000 UTC" firstStartedPulling="2024-02-09 19:33:24.431406874 +0000 UTC m=+66.836262687" lastFinishedPulling="2024-02-09 19:33:26.715905818 +0000 UTC m=+69.120761631" observedRunningTime="2024-02-09 19:33:27.853489987 +0000 UTC m=+70.258345804" watchObservedRunningTime="2024-02-09 19:33:28.851916536 +0000 UTC m=+71.256772360" Feb 9 19:33:28.859600 env[1148]: time="2024-02-09T19:33:28.859539060Z" level=info msg="CreateContainer within sandbox \"c130f5252db43351bf076803a2be61ba130083b47c13f1a18e7d234d22d137e4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"37ab885715fa34fa051ec0f816298e851942db6ea344bd28c8ba7396f2994f93\"" Feb 9 19:33:28.860483 env[1148]: time="2024-02-09T19:33:28.860443979Z" level=info msg="StartContainer for \"37ab885715fa34fa051ec0f816298e851942db6ea344bd28c8ba7396f2994f93\"" Feb 9 19:33:28.890329 systemd[1]: Started cri-containerd-37ab885715fa34fa051ec0f816298e851942db6ea344bd28c8ba7396f2994f93.scope. Feb 9 19:33:28.929425 systemd[1]: cri-containerd-37ab885715fa34fa051ec0f816298e851942db6ea344bd28c8ba7396f2994f93.scope: Deactivated successfully. Feb 9 19:33:28.933086 env[1148]: time="2024-02-09T19:33:28.932377335Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod404f19c1_7223_4803_9923_2b48b7c6224e.slice/cri-containerd-37ab885715fa34fa051ec0f816298e851942db6ea344bd28c8ba7396f2994f93.scope/memory.events\": no such file or directory" Feb 9 19:33:28.936303 env[1148]: time="2024-02-09T19:33:28.936255386Z" level=info msg="StartContainer for \"37ab885715fa34fa051ec0f816298e851942db6ea344bd28c8ba7396f2994f93\" returns successfully" Feb 9 19:33:28.967734 env[1148]: time="2024-02-09T19:33:28.967644796Z" level=info msg="shim disconnected" id=37ab885715fa34fa051ec0f816298e851942db6ea344bd28c8ba7396f2994f93 Feb 9 19:33:28.967734 env[1148]: time="2024-02-09T19:33:28.967723496Z" level=warning msg="cleaning up after shim disconnected" id=37ab885715fa34fa051ec0f816298e851942db6ea344bd28c8ba7396f2994f93 namespace=k8s.io Feb 9 19:33:28.967734 env[1148]: time="2024-02-09T19:33:28.967738294Z" level=info msg="cleaning up dead shim" Feb 9 19:33:28.979713 env[1148]: time="2024-02-09T19:33:28.979647367Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:33:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3486 runtime=io.containerd.runc.v2\n" Feb 9 19:33:29.413863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37ab885715fa34fa051ec0f816298e851942db6ea344bd28c8ba7396f2994f93-rootfs.mount: Deactivated successfully. Feb 9 19:33:29.490938 kubelet[1500]: E0209 19:33:29.490863 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:29.837986 env[1148]: time="2024-02-09T19:33:29.837930676Z" level=info msg="CreateContainer within sandbox \"c130f5252db43351bf076803a2be61ba130083b47c13f1a18e7d234d22d137e4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:33:29.859789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3933667861.mount: Deactivated successfully. 
Feb 9 19:33:29.869408 env[1148]: time="2024-02-09T19:33:29.869350791Z" level=info msg="CreateContainer within sandbox \"c130f5252db43351bf076803a2be61ba130083b47c13f1a18e7d234d22d137e4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f417fb1ee1627a13cf3358341b4d50165027c548a9998f99b88242a940a65935\"" Feb 9 19:33:29.870314 env[1148]: time="2024-02-09T19:33:29.870264458Z" level=info msg="StartContainer for \"f417fb1ee1627a13cf3358341b4d50165027c548a9998f99b88242a940a65935\"" Feb 9 19:33:29.904036 systemd[1]: Started cri-containerd-f417fb1ee1627a13cf3358341b4d50165027c548a9998f99b88242a940a65935.scope. Feb 9 19:33:29.957568 env[1148]: time="2024-02-09T19:33:29.957500514Z" level=info msg="StartContainer for \"f417fb1ee1627a13cf3358341b4d50165027c548a9998f99b88242a940a65935\" returns successfully" Feb 9 19:33:30.380211 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 19:33:30.491594 kubelet[1500]: E0209 19:33:30.491544 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:30.516594 kubelet[1500]: W0209 19:33:30.516507 1500 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod404f19c1_7223_4803_9923_2b48b7c6224e.slice/cri-containerd-f214201db2d7beaa01c1b8cf64f3779aeefc4f8c59a699a2ea0951dfc9d1043c.scope WatchSource:0}: task f214201db2d7beaa01c1b8cf64f3779aeefc4f8c59a699a2ea0951dfc9d1043c not found: not found Feb 9 19:33:30.871495 kubelet[1500]: I0209 19:33:30.871337 1500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-88jlc" podStartSLOduration=5.8710907169999995 podCreationTimestamp="2024-02-09 19:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:33:30.868308754 +0000 UTC m=+73.273164576" watchObservedRunningTime="2024-02-09 19:33:30.871090717 +0000 UTC m=+73.275946531" Feb 9 19:33:31.337342 kubelet[1500]: I0209 19:33:31.336411 1500 setters.go:548] "Node became not ready" node="10.128.0.33" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:33:31.336344868 +0000 UTC m=+73.741200682 LastTransitionTime:2024-02-09 19:33:31.336344868 +0000 UTC m=+73.741200682 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 19:33:31.492026 kubelet[1500]: E0209 19:33:31.491965 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:32.493029 kubelet[1500]: E0209 19:33:32.492981 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:33:33.080137 systemd[1]: run-containerd-runc-k8s.io-f417fb1ee1627a13cf3358341b4d50165027c548a9998f99b88242a940a65935-runc.ixk5E5.mount: Deactivated successfully. 
Feb 9 19:33:33.267820 systemd-networkd[1029]: lxc_health: Link UP
Feb 9 19:33:33.284209 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:33:33.288428 systemd-networkd[1029]: lxc_health: Gained carrier
Feb 9 19:33:33.493738 kubelet[1500]: E0209 19:33:33.493656 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:33:33.636215 kubelet[1500]: W0209 19:33:33.634630 1500 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod404f19c1_7223_4803_9923_2b48b7c6224e.slice/cri-containerd-3d720dc8545338015c041f5451ec72e2b3ae80cba9e2764562ef7ef2abe012e9.scope WatchSource:0}: task 3d720dc8545338015c041f5451ec72e2b3ae80cba9e2764562ef7ef2abe012e9 not found: not found
Feb 9 19:33:34.464368 systemd-networkd[1029]: lxc_health: Gained IPv6LL
Feb 9 19:33:34.494518 kubelet[1500]: E0209 19:33:34.494469 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:33:35.495762 kubelet[1500]: E0209 19:33:35.495712 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:33:36.496263 kubelet[1500]: E0209 19:33:36.496205 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:33:36.749325 kubelet[1500]: W0209 19:33:36.748879 1500 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod404f19c1_7223_4803_9923_2b48b7c6224e.slice/cri-containerd-6c6e9d6f6b2e3a360cef7164493743e445edfe343186168d4c0714aa946fc6de.scope WatchSource:0}: task 6c6e9d6f6b2e3a360cef7164493743e445edfe343186168d4c0714aa946fc6de not found: not found
Feb 9 19:33:37.497195 kubelet[1500]: E0209 19:33:37.497132 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:33:38.438626 kubelet[1500]: E0209 19:33:38.438582 1500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:33:38.498469 kubelet[1500]: E0209 19:33:38.498428 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:33:39.500223 kubelet[1500]: E0209 19:33:39.500153 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:33:39.858355 kubelet[1500]: W0209 19:33:39.857872 1500 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod404f19c1_7223_4803_9923_2b48b7c6224e.slice/cri-containerd-37ab885715fa34fa051ec0f816298e851942db6ea344bd28c8ba7396f2994f93.scope WatchSource:0}: task 37ab885715fa34fa051ec0f816298e851942db6ea344bd28c8ba7396f2994f93 not found: not found
Feb 9 19:33:39.965645 systemd[1]: run-containerd-runc-k8s.io-f417fb1ee1627a13cf3358341b4d50165027c548a9998f99b88242a940a65935-runc.WtZizA.mount: Deactivated successfully.
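
[Editor's note] The lxc_health link coming up right after the cilium-agent container starts is consistent with Cilium creating its health-check endpoint; systemd-networkd then reports carrier and the IPv6 link-local address. As a small illustrative sketch (not part of this log), the same link state can be checked on the node with Go's standard library:

    package main

    import (
    	"fmt"
    	"log"
    	"net"
    )

    func main() {
    	// lxc_health is the interface name reported by systemd-networkd above;
    	// this has to run on the node itself to see it.
    	iface, err := net.InterfaceByName("lxc_health")
    	if err != nil {
    		log.Fatalf("lookup lxc_health: %v", err)
    	}

    	// FlagUp being set corresponds to the "Link UP" / "Gained carrier" state.
    	fmt.Printf("index=%d mtu=%d up=%v\n", iface.Index, iface.MTU, iface.Flags&net.FlagUp != 0)

    	// The IPv6 link-local address appears once "Gained IPv6LL" is logged.
    	addrs, err := iface.Addrs()
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, a := range addrs {
    		fmt.Println("addr:", a)
    	}
    }
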
Feb 9 19:33:40.501320 kubelet[1500]: E0209 19:33:40.501208 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:33:41.502005 kubelet[1500]: E0209 19:33:41.501935 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:33:42.502253 kubelet[1500]: E0209 19:33:42.502173 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:33:43.503101 kubelet[1500]: E0209 19:33:43.503031 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:33:44.504152 kubelet[1500]: E0209 19:33:44.504085 1500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
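
[Editor's note] The repeated "Unable to read config path" messages are the kubelet's static pod file source polling /etc/kubernetes/manifests, which does not exist on this node; the condition is benign and simply means no static pod manifests are configured. The following is a minimal sketch of the same existence check, written for illustration only and not taken from the kubelet source:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Path taken from the repeated kubelet file_linux.go messages above.
    	const staticPodPath = "/etc/kubernetes/manifests"

    	info, err := os.Stat(staticPodPath)
    	switch {
    	case os.IsNotExist(err):
    		// A missing directory is ignored: there are no static pods on this node.
    		fmt.Println("path does not exist, ignoring:", staticPodPath)
    	case err != nil:
    		fmt.Println("unable to read config path:", err)
    	case !info.IsDir():
    		fmt.Println("config path is not a directory:", staticPodPath)
    	default:
    		entries, _ := os.ReadDir(staticPodPath)
    		fmt.Printf("%d static pod manifest(s) found\n", len(entries))
    	}
    }
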