Apr 12 19:01:34.191684 kernel: Linux version 5.15.154-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Apr 12 17:19:00 -00 2024
Apr 12 19:01:34.191728 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf
Apr 12 19:01:34.191746 kernel: BIOS-provided physical RAM map:
Apr 12 19:01:34.191760 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Apr 12 19:01:34.191773 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Apr 12 19:01:34.191786 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Apr 12 19:01:34.191841 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Apr 12 19:01:34.191855 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Apr 12 19:01:34.191868 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Apr 12 19:01:34.191881 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Apr 12 19:01:34.191895 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Apr 12 19:01:34.191909 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Apr 12 19:01:34.191921 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Apr 12 19:01:34.191935 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Apr 12 19:01:34.191964 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Apr 12 19:01:34.191978 kernel: NX (Execute Disable) protection: active
Apr 12 19:01:34.191992 kernel: efi: EFI v2.70 by EDK II
Apr 12 19:01:34.192008 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbe36f198 RNG=0xbfb73018 TPMEventLog=0xbe2b3018
Apr 12 19:01:34.192022 kernel: random: crng init done
Apr 12 19:01:34.192036 kernel: SMBIOS 2.4 present.
Apr 12 19:01:34.192049 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Apr 12 19:01:34.192063 kernel: Hypervisor detected: KVM
Apr 12 19:01:34.192082 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 12 19:01:34.192097 kernel: kvm-clock: cpu 0, msr 7e191001, primary cpu clock
Apr 12 19:01:34.192112 kernel: kvm-clock: using sched offset of 13318239695 cycles
Apr 12 19:01:34.192129 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 12 19:01:34.192144 kernel: tsc: Detected 2299.998 MHz processor
Apr 12 19:01:34.192160 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 12 19:01:34.192175 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 12 19:01:34.192190 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Apr 12 19:01:34.192204 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 12 19:01:34.192218 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Apr 12 19:01:34.192237 kernel: Using GB pages for direct mapping
Apr 12 19:01:34.192251 kernel: Secure boot disabled
Apr 12 19:01:34.192266 kernel: ACPI: Early table checksum verification disabled
Apr 12 19:01:34.192280 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Apr 12 19:01:34.192295 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Apr 12 19:01:34.192309 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Apr 12 19:01:34.192323 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Apr 12 19:01:34.192338 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Apr 12 19:01:34.192364 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217)
Apr 12 19:01:34.192379 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Apr 12 19:01:34.192395 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Apr 12 19:01:34.192410 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Apr 12 19:01:34.192426 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Apr 12 19:01:34.192442 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Apr 12 19:01:34.192460 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Apr 12 19:01:34.192476 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Apr 12 19:01:34.192492 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Apr 12 19:01:34.192507 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Apr 12 19:01:34.192523 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Apr 12 19:01:34.192538 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Apr 12 19:01:34.192554 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Apr 12 19:01:34.192569 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Apr 12 19:01:34.192584 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Apr 12 19:01:34.192604 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 12 19:01:34.192620 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 12 19:01:34.192635 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 12 19:01:34.192650 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Apr 12 19:01:34.192666 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Apr 12 19:01:34.192682 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Apr 12 19:01:34.192698 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Apr 12 19:01:34.192713 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Apr 12 19:01:34.192729 kernel: Zone ranges:
Apr 12 19:01:34.192749 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 12 19:01:34.192765 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 12 19:01:34.192780 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Apr 12 19:01:34.192796 kernel: Movable zone start for each node
Apr 12 19:01:34.200283 kernel: Early memory node ranges
Apr 12 19:01:34.200307 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Apr 12 19:01:34.200325 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Apr 12 19:01:34.200342 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Apr 12 19:01:34.200359 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Apr 12 19:01:34.200383 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Apr 12 19:01:34.200400 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Apr 12 19:01:34.200417 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 12 19:01:34.200435 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Apr 12 19:01:34.200451 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Apr 12 19:01:34.200469 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 12 19:01:34.200486 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Apr 12 19:01:34.200503 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 12 19:01:34.200520 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 12 19:01:34.200542 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 12 19:01:34.200559 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 12 19:01:34.200575 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 12 19:01:34.200592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 12 19:01:34.200609 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 12 19:01:34.200626 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 12 19:01:34.200643 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 12 19:01:34.200659 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 12 19:01:34.200676 kernel: Booting paravirtualized kernel on KVM
Apr 12 19:01:34.200697 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 12 19:01:34.200714 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Apr 12 19:01:34.200731 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Apr 12 19:01:34.200747 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Apr 12 19:01:34.200763 kernel: pcpu-alloc: [0] 0 1
Apr 12 19:01:34.200780 kernel: kvm-guest: PV spinlocks enabled
Apr 12 19:01:34.200810 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 12 19:01:34.200837 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1931256
Apr 12 19:01:34.200854 kernel: Policy zone: Normal
Apr 12 19:01:34.200878 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf
Apr 12 19:01:34.200895 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 12 19:01:34.200910 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 12 19:01:34.200927 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 12 19:01:34.200943 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 12 19:01:34.200969 kernel: Memory: 7534424K/7860584K available (12294K kernel code, 2275K rwdata, 13708K rodata, 47440K init, 4148K bss, 325900K reserved, 0K cma-reserved)
Apr 12 19:01:34.200986 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 12 19:01:34.201004 kernel: Kernel/User page tables isolation: enabled
Apr 12 19:01:34.201024 kernel: ftrace: allocating 34508 entries in 135 pages
Apr 12 19:01:34.201041 kernel: ftrace: allocated 135 pages with 4 groups
Apr 12 19:01:34.201058 kernel: rcu: Hierarchical RCU implementation.
Apr 12 19:01:34.201077 kernel: rcu: RCU event tracing is enabled.
Apr 12 19:01:34.201094 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 12 19:01:34.201111 kernel: Rude variant of Tasks RCU enabled.
Apr 12 19:01:34.201128 kernel: Tracing variant of Tasks RCU enabled.
Apr 12 19:01:34.201145 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 12 19:01:34.201162 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 12 19:01:34.201184 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 12 19:01:34.201215 kernel: Console: colour dummy device 80x25
Apr 12 19:01:34.201233 kernel: printk: console [ttyS0] enabled
Apr 12 19:01:34.201255 kernel: ACPI: Core revision 20210730
Apr 12 19:01:34.201272 kernel: APIC: Switch to symmetric I/O mode setup
Apr 12 19:01:34.201290 kernel: x2apic enabled
Apr 12 19:01:34.201308 kernel: Switched APIC routing to physical x2apic.
Apr 12 19:01:34.201325 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Apr 12 19:01:34.201344 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 12 19:01:34.201362 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Apr 12 19:01:34.201384 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Apr 12 19:01:34.201402 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Apr 12 19:01:34.201420 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 12 19:01:34.201438 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Apr 12 19:01:34.201455 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Apr 12 19:01:34.201474 kernel: Spectre V2 : Mitigation: IBRS
Apr 12 19:01:34.201496 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 12 19:01:34.201514 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 12 19:01:34.201531 kernel: RETBleed: Mitigation: IBRS
Apr 12 19:01:34.201549 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 12 19:01:34.201567 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Apr 12 19:01:34.201585 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Apr 12 19:01:34.201609 kernel: MDS: Mitigation: Clear CPU buffers
Apr 12 19:01:34.201627 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 12 19:01:34.201645 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 12 19:01:34.201666 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 12 19:01:34.201684 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 12 19:01:34.201701 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 12 19:01:34.201719 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 12 19:01:34.201737 kernel: Freeing SMP alternatives memory: 32K
Apr 12 19:01:34.201755 kernel: pid_max: default: 32768 minimum: 301
Apr 12 19:01:34.201773 kernel: LSM: Security Framework initializing
Apr 12 19:01:34.201791 kernel: SELinux: Initializing.
Apr 12 19:01:34.215891 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 12 19:01:34.215926 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 12 19:01:34.215954 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Apr 12 19:01:34.215973 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Apr 12 19:01:34.215991 kernel: signal: max sigframe size: 1776
Apr 12 19:01:34.216008 kernel: rcu: Hierarchical SRCU implementation.
Apr 12 19:01:34.216026 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 12 19:01:34.216049 kernel: smp: Bringing up secondary CPUs ...
Apr 12 19:01:34.216068 kernel: x86: Booting SMP configuration:
Apr 12 19:01:34.216084 kernel: .... node #0, CPUs: #1
Apr 12 19:01:34.216108 kernel: kvm-clock: cpu 1, msr 7e191041, secondary cpu clock
Apr 12 19:01:34.216134 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 12 19:01:34.216155 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 12 19:01:34.216171 kernel: smp: Brought up 1 node, 2 CPUs
Apr 12 19:01:34.216189 kernel: smpboot: Max logical packages: 1
Apr 12 19:01:34.216206 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Apr 12 19:01:34.216233 kernel: devtmpfs: initialized
Apr 12 19:01:34.216251 kernel: x86/mm: Memory block size: 128MB
Apr 12 19:01:34.216268 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Apr 12 19:01:34.216298 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 12 19:01:34.216318 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 12 19:01:34.216334 kernel: pinctrl core: initialized pinctrl subsystem
Apr 12 19:01:34.216351 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 12 19:01:34.216377 kernel: audit: initializing netlink subsys (disabled)
Apr 12 19:01:34.216396 kernel: audit: type=2000 audit(1712948492.869:1): state=initialized audit_enabled=0 res=1
Apr 12 19:01:34.216412 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 12 19:01:34.216439 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 12 19:01:34.216468 kernel: cpuidle: using governor menu
Apr 12 19:01:34.216485 kernel: ACPI: bus type PCI registered
Apr 12 19:01:34.216510 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 12 19:01:34.216528 kernel: dca service started, version 1.12.1
Apr 12 19:01:34.216545 kernel: PCI: Using configuration type 1 for base access
Apr 12 19:01:34.216563 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 12 19:01:34.216581 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Apr 12 19:01:34.216599 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Apr 12 19:01:34.216617 kernel: ACPI: Added _OSI(Module Device)
Apr 12 19:01:34.216634 kernel: ACPI: Added _OSI(Processor Device)
Apr 12 19:01:34.216657 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 12 19:01:34.216675 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 12 19:01:34.216693 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Apr 12 19:01:34.216711 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Apr 12 19:01:34.216728 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Apr 12 19:01:34.216745 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 12 19:01:34.216763 kernel: ACPI: Interpreter enabled
Apr 12 19:01:34.216780 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 12 19:01:34.216823 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 12 19:01:34.216846 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 12 19:01:34.216864 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Apr 12 19:01:34.216882 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 12 19:01:34.217164 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 12 19:01:34.217334 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Apr 12 19:01:34.217357 kernel: PCI host bridge to bus 0000:00
Apr 12 19:01:34.217525 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 12 19:01:34.217677 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 12 19:01:34.217833 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 12 19:01:34.217975 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Apr 12 19:01:34.218116 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 12 19:01:34.218299 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 12 19:01:34.218479 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Apr 12 19:01:34.218655 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 12 19:01:34.218829 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 12 19:01:34.218998 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Apr 12 19:01:34.219156 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 12 19:01:34.219310 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Apr 12 19:01:34.219509 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 12 19:01:34.219668 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Apr 12 19:01:34.219843 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Apr 12 19:01:34.220009 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Apr 12 19:01:34.220166 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Apr 12 19:01:34.220317 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Apr 12 19:01:34.220338 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 12 19:01:34.220356 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 12 19:01:34.220374 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 12 19:01:34.220396 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 12 19:01:34.220414 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 12 19:01:34.220431 kernel: iommu: Default domain type: Translated
Apr 12 19:01:34.220459 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 12 19:01:34.220476 kernel: vgaarb: loaded
Apr 12 19:01:34.220494 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 12 19:01:34.220512 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 12 19:01:34.220530 kernel: PTP clock support registered
Apr 12 19:01:34.220547 kernel: Registered efivars operations
Apr 12 19:01:34.220568 kernel: PCI: Using ACPI for IRQ routing
Apr 12 19:01:34.220585 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 12 19:01:34.220602 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Apr 12 19:01:34.220619 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Apr 12 19:01:34.220635 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Apr 12 19:01:34.220653 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Apr 12 19:01:34.220670 kernel: clocksource: Switched to clocksource kvm-clock
Apr 12 19:01:34.220688 kernel: VFS: Disk quotas dquot_6.6.0
Apr 12 19:01:34.220706 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 12 19:01:34.220727 kernel: pnp: PnP ACPI init
Apr 12 19:01:34.220744 kernel: pnp: PnP ACPI: found 7 devices
Apr 12 19:01:34.220761 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 12 19:01:34.220778 kernel: NET: Registered PF_INET protocol family
Apr 12 19:01:34.222816 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 12 19:01:34.222866 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 12 19:01:34.222885 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 12 19:01:34.222905 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 12 19:01:34.222923 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Apr 12 19:01:34.222949 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 12 19:01:34.222968 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 12 19:01:34.222986 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 12 19:01:34.223004 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 12 19:01:34.223023 kernel: NET: Registered PF_XDP protocol family
Apr 12 19:01:34.223223 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 12 19:01:34.223368 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 12 19:01:34.223516 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 12 19:01:34.223654 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Apr 12 19:01:34.223853 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 12 19:01:34.223877 kernel: PCI: CLS 0 bytes, default 64
Apr 12 19:01:34.223894 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 12 19:01:34.223911 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB)
Apr 12 19:01:34.223928 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 12 19:01:34.223946 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 12 19:01:34.223962 kernel: clocksource: Switched to clocksource tsc
Apr 12 19:01:34.223984 kernel: Initialise system trusted keyrings
Apr 12 19:01:34.224002 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 12 19:01:34.224018 kernel: Key type asymmetric registered
Apr 12 19:01:34.224035 kernel: Asymmetric key parser 'x509' registered
Apr 12 19:01:34.224051 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Apr 12 19:01:34.224068 kernel: io scheduler mq-deadline registered
Apr 12 19:01:34.224086 kernel: io scheduler kyber registered
Apr 12 19:01:34.224103 kernel: io scheduler bfq registered
Apr 12 19:01:34.224121 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 12 19:01:34.224143 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 12 19:01:34.224328 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Apr 12 19:01:34.224352 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Apr 12 19:01:34.224520 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Apr 12 19:01:34.224542 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 12 19:01:34.224702 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Apr 12 19:01:34.224724 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 12 19:01:34.224743 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 12 19:01:34.224761 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 12 19:01:34.224785 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Apr 12 19:01:34.224823 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Apr 12 19:01:34.224996 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Apr 12 19:01:34.225020 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 12 19:01:34.225054 kernel: i8042: Warning: Keylock active
Apr 12 19:01:34.225069 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 12 19:01:34.225082 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 12 19:01:34.225271 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 12 19:01:34.225430 kernel: rtc_cmos 00:00: registered as rtc0
Apr 12 19:01:34.225585 kernel: rtc_cmos 00:00: setting system clock to 2024-04-12T19:01:33 UTC (1712948493)
Apr 12 19:01:34.225726 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 12 19:01:34.225747 kernel: intel_pstate: CPU model not supported
Apr 12 19:01:34.225764 kernel: pstore: Registered efi as persistent store backend
Apr 12 19:01:34.225780 kernel: NET: Registered PF_INET6 protocol family
Apr 12 19:01:34.225812 kernel: Segment Routing with IPv6
Apr 12 19:01:34.225829 kernel: In-situ OAM (IOAM) with IPv6
Apr 12 19:01:34.225851 kernel: NET: Registered PF_PACKET protocol family
Apr 12 19:01:34.225868 kernel: Key type dns_resolver registered
Apr 12 19:01:34.225885 kernel: IPI shorthand broadcast: enabled
Apr 12 19:01:34.225901 kernel: sched_clock: Marking stable (775703108, 128861749)->(934344263, -29779406)
Apr 12 19:01:34.225917 kernel: registered taskstats version 1
Apr 12 19:01:34.225934 kernel: Loading compiled-in X.509 certificates
Apr 12 19:01:34.225950 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 12 19:01:34.225967 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.154-flatcar: 1fa140a38fc6bd27c8b56127e4d1eb4f665c7ec4'
Apr 12 19:01:34.225983 kernel: Key type .fscrypt registered
Apr 12 19:01:34.226005 kernel: Key type fscrypt-provisioning registered
Apr 12 19:01:34.226023 kernel: pstore: Using crash dump compression: deflate
Apr 12 19:01:34.226039 kernel: ima: Allocated hash algorithm: sha1
Apr 12 19:01:34.226055 kernel: ima: No architecture policies found
Apr 12 19:01:34.226071 kernel: Freeing unused kernel image (initmem) memory: 47440K
Apr 12 19:01:34.226085 kernel: Write protecting the kernel read-only data: 28672k
Apr 12 19:01:34.226102 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Apr 12 19:01:34.226118 kernel: Freeing unused kernel image (rodata/data gap) memory: 628K
Apr 12 19:01:34.226139 kernel: Run /init as init process
Apr 12 19:01:34.226153 kernel: with arguments:
Apr 12 19:01:34.226167 kernel: /init
Apr 12 19:01:34.226182 kernel: with environment:
Apr 12 19:01:34.226196 kernel: HOME=/
Apr 12 19:01:34.226211 kernel: TERM=linux
Apr 12 19:01:34.226225 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 12 19:01:34.226245 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 19:01:34.226267 systemd[1]: Detected virtualization kvm.
Apr 12 19:01:34.226284 systemd[1]: Detected architecture x86-64.
Apr 12 19:01:34.226299 systemd[1]: Running in initrd.
Apr 12 19:01:34.226314 systemd[1]: No hostname configured, using default hostname.
Apr 12 19:01:34.226329 systemd[1]: Hostname set to .
Apr 12 19:01:34.226346 systemd[1]: Initializing machine ID from VM UUID.
Apr 12 19:01:34.226361 systemd[1]: Queued start job for default target initrd.target.
Apr 12 19:01:34.226378 systemd[1]: Started systemd-ask-password-console.path.
Apr 12 19:01:34.226398 systemd[1]: Reached target cryptsetup.target.
Apr 12 19:01:34.226414 systemd[1]: Reached target paths.target.
Apr 12 19:01:34.226429 systemd[1]: Reached target slices.target.
Apr 12 19:01:34.226455 systemd[1]: Reached target swap.target.
Apr 12 19:01:34.226471 systemd[1]: Reached target timers.target.
Apr 12 19:01:34.226491 systemd[1]: Listening on iscsid.socket.
Apr 12 19:01:34.226507 systemd[1]: Listening on iscsiuio.socket.
Apr 12 19:01:34.226528 systemd[1]: Listening on systemd-journald-audit.socket.
Apr 12 19:01:34.226544 systemd[1]: Listening on systemd-journald-dev-log.socket.
Apr 12 19:01:34.226562 systemd[1]: Listening on systemd-journald.socket.
Apr 12 19:01:34.226579 systemd[1]: Listening on systemd-networkd.socket.
Apr 12 19:01:34.226597 systemd[1]: Listening on systemd-udevd-control.socket.
Apr 12 19:01:34.226613 systemd[1]: Listening on systemd-udevd-kernel.socket.
Apr 12 19:01:34.226630 systemd[1]: Reached target sockets.target.
Apr 12 19:01:34.226647 systemd[1]: Starting kmod-static-nodes.service...
Apr 12 19:01:34.226665 systemd[1]: Finished network-cleanup.service.
Apr 12 19:01:34.226688 systemd[1]: Starting systemd-fsck-usr.service...
Apr 12 19:01:34.226703 systemd[1]: Starting systemd-journald.service...
Apr 12 19:01:34.226720 systemd[1]: Starting systemd-modules-load.service...
Apr 12 19:01:34.226758 systemd[1]: Starting systemd-resolved.service...
Apr 12 19:01:34.226781 systemd[1]: Starting systemd-vconsole-setup.service...
Apr 12 19:01:34.236183 systemd[1]: Finished kmod-static-nodes.service.
Apr 12 19:01:34.236365 kernel: audit: type=1130 audit(1712948494.191:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:34.236396 systemd[1]: Finished systemd-fsck-usr.service.
Apr 12 19:01:34.236546 kernel: audit: type=1130 audit(1712948494.202:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:34.236566 systemd[1]: Finished systemd-vconsole-setup.service.
Apr 12 19:01:34.236585 systemd[1]: Starting dracut-cmdline-ask.service...
Apr 12 19:01:34.236604 kernel: audit: type=1130 audit(1712948494.223:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:34.236748 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Apr 12 19:01:34.236776 systemd-journald[189]: Journal started
Apr 12 19:01:34.237031 systemd-journald[189]: Runtime Journal (/run/log/journal/2f7c258c52625efc476cf83262f12b12) is 8.0M, max 148.8M, 140.8M free.
Apr 12 19:01:34.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:34.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:34.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:34.226936 systemd-modules-load[190]: Inserted module 'overlay'
Apr 12 19:01:34.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:34.254489 systemd[1]: Started systemd-journald.service.
Apr 12 19:01:34.254562 kernel: audit: type=1130 audit(1712948494.247:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:34.255250 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Apr 12 19:01:34.256707 systemd-resolved[191]: Positive Trust Anchors:
Apr 12 19:01:34.256731 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 12 19:01:34.256787 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Apr 12 19:01:34.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:34.262819 kernel: audit: type=1130 audit(1712948494.254:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:34.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:34.274242 systemd-resolved[191]: Defaulting to hostname 'linux'.
Apr 12 19:01:34.276045 systemd[1]: Started systemd-resolved.service.
Apr 12 19:01:34.294208 kernel: audit: type=1130 audit(1712948494.274:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:34.276399 systemd[1]: Reached target nss-lookup.target.
Apr 12 19:01:34.307001 kernel: audit: type=1130 audit(1712948494.293:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Apr 12 19:01:34.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:34.290985 systemd[1]: Finished dracut-cmdline-ask.service. Apr 12 19:01:34.302789 systemd[1]: Starting dracut-cmdline.service... Apr 12 19:01:34.316833 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 12 19:01:34.323872 dracut-cmdline[205]: dracut-dracut-053 Apr 12 19:01:34.327928 kernel: Bridge firewalling registered Apr 12 19:01:34.326686 systemd-modules-load[190]: Inserted module 'br_netfilter' Apr 12 19:01:34.331934 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf Apr 12 19:01:34.361831 kernel: SCSI subsystem initialized Apr 12 19:01:34.381341 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 12 19:01:34.381438 kernel: device-mapper: uevent: version 1.0.3 Apr 12 19:01:34.381465 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Apr 12 19:01:34.388283 systemd-modules-load[190]: Inserted module 'dm_multipath' Apr 12 19:01:34.389697 systemd[1]: Finished systemd-modules-load.service. Apr 12 19:01:34.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 19:01:34.405879 kernel: audit: type=1130 audit(1712948494.399:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:34.404728 systemd[1]: Starting systemd-sysctl.service... Apr 12 19:01:34.420187 systemd[1]: Finished systemd-sysctl.service. Apr 12 19:01:34.430988 kernel: audit: type=1130 audit(1712948494.422:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:34.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:34.440837 kernel: Loading iSCSI transport class v2.0-870. Apr 12 19:01:34.461855 kernel: iscsi: registered transport (tcp) Apr 12 19:01:34.490872 kernel: iscsi: registered transport (qla4xxx) Apr 12 19:01:34.490991 kernel: QLogic iSCSI HBA Driver Apr 12 19:01:34.539021 systemd[1]: Finished dracut-cmdline.service. Apr 12 19:01:34.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:34.540779 systemd[1]: Starting dracut-pre-udev.service... 
Apr 12 19:01:34.600856 kernel: raid6: avx2x4 gen() 22968 MB/s Apr 12 19:01:34.618839 kernel: raid6: avx2x4 xor() 6408 MB/s Apr 12 19:01:34.635840 kernel: raid6: avx2x2 gen() 23685 MB/s Apr 12 19:01:34.653840 kernel: raid6: avx2x2 xor() 18653 MB/s Apr 12 19:01:34.670846 kernel: raid6: avx2x1 gen() 20885 MB/s Apr 12 19:01:34.688841 kernel: raid6: avx2x1 xor() 16079 MB/s Apr 12 19:01:34.705837 kernel: raid6: sse2x4 gen() 10240 MB/s Apr 12 19:01:34.722842 kernel: raid6: sse2x4 xor() 6163 MB/s Apr 12 19:01:34.740836 kernel: raid6: sse2x2 gen() 10896 MB/s Apr 12 19:01:34.757839 kernel: raid6: sse2x2 xor() 7397 MB/s Apr 12 19:01:34.774839 kernel: raid6: sse2x1 gen() 9685 MB/s Apr 12 19:01:34.793249 kernel: raid6: sse2x1 xor() 5171 MB/s Apr 12 19:01:34.793287 kernel: raid6: using algorithm avx2x2 gen() 23685 MB/s Apr 12 19:01:34.793327 kernel: raid6: .... xor() 18653 MB/s, rmw enabled Apr 12 19:01:34.794017 kernel: raid6: using avx2x2 recovery algorithm Apr 12 19:01:34.809846 kernel: xor: automatically using best checksumming function avx Apr 12 19:01:34.925837 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Apr 12 19:01:34.938439 systemd[1]: Finished dracut-pre-udev.service. Apr 12 19:01:34.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:34.946000 audit: BPF prog-id=7 op=LOAD Apr 12 19:01:34.947000 audit: BPF prog-id=8 op=LOAD Apr 12 19:01:34.949670 systemd[1]: Starting systemd-udevd.service... Apr 12 19:01:34.968558 systemd-udevd[388]: Using default interface naming scheme 'v252'. Apr 12 19:01:34.976112 systemd[1]: Started systemd-udevd.service. Apr 12 19:01:34.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 19:01:34.995492 systemd[1]: Starting dracut-pre-trigger.service... Apr 12 19:01:35.012080 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Apr 12 19:01:35.053740 systemd[1]: Finished dracut-pre-trigger.service. Apr 12 19:01:35.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:35.055062 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 19:01:35.124094 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 19:01:35.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:35.203834 kernel: cryptd: max_cpu_qlen set to 1000 Apr 12 19:01:35.254873 kernel: scsi host0: Virtio SCSI HBA Apr 12 19:01:35.262034 kernel: AVX2 version of gcm_enc/dec engaged. Apr 12 19:01:35.294680 kernel: AES CTR mode by8 optimization enabled Apr 12 19:01:35.320089 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Apr 12 19:01:35.392590 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Apr 12 19:01:35.393018 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Apr 12 19:01:35.393222 kernel: sd 0:0:1:0: [sda] Write Protect is off Apr 12 19:01:35.397621 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Apr 12 19:01:35.397942 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 12 19:01:35.424824 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 12 19:01:35.424937 kernel: GPT:17805311 != 25165823 Apr 12 19:01:35.424961 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 12 19:01:35.430913 kernel: GPT:17805311 != 25165823 Apr 12 19:01:35.434613 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 12 19:01:35.445190 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 19:01:35.452530 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Apr 12 19:01:35.499252 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Apr 12 19:01:35.532121 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (441) Apr 12 19:01:35.526140 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Apr 12 19:01:35.541964 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Apr 12 19:01:35.575940 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Apr 12 19:01:35.591815 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 19:01:35.607238 systemd[1]: Starting disk-uuid.service... Apr 12 19:01:35.633121 disk-uuid[512]: Primary Header is updated. Apr 12 19:01:35.633121 disk-uuid[512]: Secondary Entries is updated. Apr 12 19:01:35.633121 disk-uuid[512]: Secondary Header is updated. Apr 12 19:01:35.661918 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 19:01:35.668827 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 19:01:35.693861 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 19:01:36.685247 disk-uuid[513]: The operation has completed successfully. Apr 12 19:01:36.694077 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 19:01:36.752958 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 12 19:01:36.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:36.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:36.753091 systemd[1]: Finished disk-uuid.service. Apr 12 19:01:36.770138 systemd[1]: Starting verity-setup.service... 
Apr 12 19:01:36.799385 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 12 19:01:36.875639 systemd[1]: Found device dev-mapper-usr.device. Apr 12 19:01:36.877090 systemd[1]: Mounting sysusr-usr.mount... Apr 12 19:01:36.895444 systemd[1]: Finished verity-setup.service. Apr 12 19:01:36.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:36.981738 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Apr 12 19:01:36.981645 systemd[1]: Mounted sysusr-usr.mount. Apr 12 19:01:36.989262 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Apr 12 19:01:37.036969 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 19:01:37.037001 kernel: BTRFS info (device sda6): using free space tree Apr 12 19:01:37.037016 kernel: BTRFS info (device sda6): has skinny extents Apr 12 19:01:36.990255 systemd[1]: Starting ignition-setup.service... Apr 12 19:01:37.049988 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 12 19:01:37.005147 systemd[1]: Starting parse-ip-for-networkd.service... Apr 12 19:01:37.062596 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 12 19:01:37.071589 systemd[1]: Finished ignition-setup.service. Apr 12 19:01:37.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:37.093227 systemd[1]: Starting ignition-fetch-offline.service... Apr 12 19:01:37.121141 systemd[1]: Finished parse-ip-for-networkd.service. 
Apr 12 19:01:37.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:37.129000 audit: BPF prog-id=9 op=LOAD Apr 12 19:01:37.132140 systemd[1]: Starting systemd-networkd.service... Apr 12 19:01:37.165086 systemd-networkd[687]: lo: Link UP Apr 12 19:01:37.165101 systemd-networkd[687]: lo: Gained carrier Apr 12 19:01:37.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:37.166255 systemd-networkd[687]: Enumeration completed Apr 12 19:01:37.166393 systemd[1]: Started systemd-networkd.service. Apr 12 19:01:37.166864 systemd-networkd[687]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 19:01:37.169154 systemd-networkd[687]: eth0: Link UP Apr 12 19:01:37.169161 systemd-networkd[687]: eth0: Gained carrier Apr 12 19:01:37.180182 systemd[1]: Reached target network.target. Apr 12 19:01:37.181904 systemd-networkd[687]: eth0: DHCPv4 address 10.128.0.35/32, gateway 10.128.0.1 acquired from 169.254.169.254 Apr 12 19:01:37.188689 systemd[1]: Starting iscsiuio.service... Apr 12 19:01:37.268083 systemd[1]: Started iscsiuio.service. Apr 12 19:01:37.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:37.276591 systemd[1]: Starting iscsid.service... Apr 12 19:01:37.288924 iscsid[696]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Apr 12 19:01:37.288924 iscsid[696]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Apr 12 19:01:37.288924 iscsid[696]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Apr 12 19:01:37.288924 iscsid[696]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Apr 12 19:01:37.288924 iscsid[696]: If using hardware iscsi like qla4xxx this message can be ignored. Apr 12 19:01:37.288924 iscsid[696]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Apr 12 19:01:37.288924 iscsid[696]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Apr 12 19:01:37.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:37.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:37.333166 systemd[1]: Started iscsid.service. Apr 12 19:01:37.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:37.378374 ignition[665]: Ignition 2.14.0 Apr 12 19:01:37.362216 systemd[1]: Starting dracut-initqueue.service... Apr 12 19:01:37.378389 ignition[665]: Stage: fetch-offline Apr 12 19:01:37.400470 systemd[1]: Finished dracut-initqueue.service. Apr 12 19:01:37.378467 ignition[665]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 19:01:37.409434 systemd[1]: Finished ignition-fetch-offline.service. 
Apr 12 19:01:37.378509 ignition[665]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Apr 12 19:01:37.429246 systemd[1]: Reached target remote-fs-pre.target. Apr 12 19:01:37.399410 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 12 19:01:37.445038 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 19:01:37.399616 ignition[665]: parsed url from cmdline: "" Apr 12 19:01:37.460025 systemd[1]: Reached target remote-fs.target. Apr 12 19:01:37.399624 ignition[665]: no config URL provided Apr 12 19:01:37.469126 systemd[1]: Starting dracut-pre-mount.service... Apr 12 19:01:37.399634 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 19:01:37.488065 systemd[1]: Starting ignition-fetch.service... Apr 12 19:01:37.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:37.399647 ignition[665]: no config at "/usr/lib/ignition/user.ign" Apr 12 19:01:37.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:37.560131 unknown[711]: fetched base config from "system" Apr 12 19:01:37.399658 ignition[665]: failed to fetch config: resource requires networking Apr 12 19:01:37.560145 unknown[711]: fetched base config from "system" Apr 12 19:01:37.400036 ignition[665]: Ignition finished successfully Apr 12 19:01:37.560155 unknown[711]: fetched user config from "gcp" Apr 12 19:01:37.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 19:01:37.499017 ignition[711]: Ignition 2.14.0 Apr 12 19:01:37.564608 systemd[1]: Finished dracut-pre-mount.service. Apr 12 19:01:37.499027 ignition[711]: Stage: fetch Apr 12 19:01:37.587436 systemd[1]: Finished ignition-fetch.service. Apr 12 19:01:37.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:37.499154 ignition[711]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 19:01:37.604004 systemd[1]: Starting ignition-kargs.service... Apr 12 19:01:37.499184 ignition[711]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Apr 12 19:01:37.641440 systemd[1]: Finished ignition-kargs.service. Apr 12 19:01:37.506200 ignition[711]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 12 19:01:37.657553 systemd[1]: Starting ignition-disks.service... Apr 12 19:01:37.506381 ignition[711]: parsed url from cmdline: "" Apr 12 19:01:37.682268 systemd[1]: Finished ignition-disks.service. Apr 12 19:01:37.506387 ignition[711]: no config URL provided Apr 12 19:01:37.686316 systemd[1]: Reached target initrd-root-device.target. Apr 12 19:01:37.506394 ignition[711]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 19:01:37.708070 systemd[1]: Reached target local-fs-pre.target. Apr 12 19:01:37.506404 ignition[711]: no config at "/usr/lib/ignition/user.ign" Apr 12 19:01:37.725088 systemd[1]: Reached target local-fs.target. Apr 12 19:01:37.506441 ignition[711]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Apr 12 19:01:37.725194 systemd[1]: Reached target sysinit.target. Apr 12 19:01:37.515543 ignition[711]: GET result: OK Apr 12 19:01:37.747062 systemd[1]: Reached target basic.target. 
Apr 12 19:01:37.515719 ignition[711]: parsing config with SHA512: 0517f7ee11871cbe2b473241cff4757a9915567100f392cff4b1a9a04b4783ee81568afa33cc442f90e3070d8d06b402fdf5213781854bb40083567af3dc91e8 Apr 12 19:01:37.761606 systemd[1]: Starting systemd-fsck-root.service... Apr 12 19:01:37.561020 ignition[711]: fetch: fetch complete Apr 12 19:01:37.561499 ignition[711]: fetch: fetch passed Apr 12 19:01:37.561547 ignition[711]: Ignition finished successfully Apr 12 19:01:37.619629 ignition[717]: Ignition 2.14.0 Apr 12 19:01:37.619638 ignition[717]: Stage: kargs Apr 12 19:01:37.619881 ignition[717]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 19:01:37.619917 ignition[717]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Apr 12 19:01:37.629662 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 12 19:01:37.631780 ignition[717]: kargs: kargs passed Apr 12 19:01:37.631873 ignition[717]: Ignition finished successfully Apr 12 19:01:37.671170 ignition[723]: Ignition 2.14.0 Apr 12 19:01:37.671183 ignition[723]: Stage: disks Apr 12 19:01:37.671352 ignition[723]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 19:01:37.671389 ignition[723]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Apr 12 19:01:37.679184 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 12 19:01:37.681058 ignition[723]: disks: disks passed Apr 12 19:01:37.681119 ignition[723]: Ignition finished successfully Apr 12 19:01:37.796989 systemd-fsck[731]: ROOT: clean, 612/1628000 files, 124056/1617920 blocks Apr 12 19:01:37.946102 systemd[1]: Finished systemd-fsck-root.service. 
Apr 12 19:01:37.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:37.947762 systemd[1]: Mounting sysroot.mount... Apr 12 19:01:37.983008 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Apr 12 19:01:37.977943 systemd[1]: Mounted sysroot.mount. Apr 12 19:01:37.991389 systemd[1]: Reached target initrd-root-fs.target. Apr 12 19:01:38.010108 systemd[1]: Mounting sysroot-usr.mount... Apr 12 19:01:38.025552 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Apr 12 19:01:38.025647 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 12 19:01:38.025700 systemd[1]: Reached target ignition-diskful.target. Apr 12 19:01:38.105958 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (737) Apr 12 19:01:38.106024 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 19:01:38.106045 kernel: BTRFS info (device sda6): using free space tree Apr 12 19:01:38.106066 kernel: BTRFS info (device sda6): has skinny extents Apr 12 19:01:38.027384 systemd[1]: Mounted sysroot-usr.mount. Apr 12 19:01:38.057877 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 19:01:38.134004 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 12 19:01:38.068725 systemd[1]: Starting initrd-setup-root.service... Apr 12 19:01:38.143075 initrd-setup-root[742]: cut: /sysroot/etc/passwd: No such file or directory Apr 12 19:01:38.131273 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Apr 12 19:01:38.163005 initrd-setup-root[766]: cut: /sysroot/etc/group: No such file or directory Apr 12 19:01:38.181975 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory Apr 12 19:01:38.191969 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory Apr 12 19:01:38.210450 systemd[1]: Finished initrd-setup-root.service. Apr 12 19:01:38.250041 kernel: kauditd_printk_skb: 23 callbacks suppressed Apr 12 19:01:38.250098 kernel: audit: type=1130 audit(1712948498.208:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:38.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:38.212134 systemd[1]: Starting ignition-mount.service... Apr 12 19:01:38.258371 systemd[1]: Starting sysroot-boot.service... Apr 12 19:01:38.272986 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Apr 12 19:01:38.273134 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Apr 12 19:01:38.295988 ignition[803]: INFO : Ignition 2.14.0 Apr 12 19:01:38.295988 ignition[803]: INFO : Stage: mount Apr 12 19:01:38.295988 ignition[803]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 19:01:38.295988 ignition[803]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Apr 12 19:01:38.349134 kernel: audit: type=1130 audit(1712948498.310:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 19:01:38.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:38.307688 systemd[1]: Finished sysroot-boot.service. Apr 12 19:01:38.401057 kernel: audit: type=1130 audit(1712948498.372:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:38.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:38.401178 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 12 19:01:38.401178 ignition[803]: INFO : mount: mount passed Apr 12 19:01:38.401178 ignition[803]: INFO : Ignition finished successfully Apr 12 19:01:38.474000 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (812) Apr 12 19:01:38.474046 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 19:01:38.474071 kernel: BTRFS info (device sda6): using free space tree Apr 12 19:01:38.474093 kernel: BTRFS info (device sda6): has skinny extents Apr 12 19:01:38.474116 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 12 19:01:38.314364 systemd[1]: Finished ignition-mount.service. Apr 12 19:01:38.376183 systemd[1]: Starting ignition-files.service... Apr 12 19:01:38.412476 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Apr 12 19:01:38.506044 ignition[831]: INFO : Ignition 2.14.0
Apr 12 19:01:38.506044 ignition[831]: INFO : Stage: files
Apr 12 19:01:38.506044 ignition[831]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Apr 12 19:01:38.506044 ignition[831]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Apr 12 19:01:38.473710 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Apr 12 19:01:38.560006 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 12 19:01:38.560006 ignition[831]: DEBUG : files: compiled without relabeling support, skipping
Apr 12 19:01:38.560006 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 12 19:01:38.560006 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 12 19:01:38.560006 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 12 19:01:38.560006 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 12 19:01:38.560006 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 12 19:01:38.560006 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Apr 12 19:01:38.560006 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Apr 12 19:01:38.534511 unknown[831]: wrote ssh authorized keys file for user: core
Apr 12 19:01:38.829893 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 12 19:01:39.104894 ignition[831]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Apr 12 19:01:39.129082 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Apr 12 19:01:39.129082 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 12 19:01:39.129082 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 12 19:01:39.180015 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 12 19:01:39.192960 systemd-networkd[687]: eth0: Gained IPv6LL
Apr 12 19:01:39.329894 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 12 19:01:39.356994 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (831)
Apr 12 19:01:39.357061 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/hosts"
Apr 12 19:01:39.357061 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 19:01:39.357061 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(6): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem881627084"
Apr 12 19:01:39.357061 ignition[831]: CRITICAL : files: createFilesystemsFiles: createFiles: op(5): op(6): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem881627084": device or resource busy
Apr 12 19:01:39.357061 ignition[831]: ERROR : files: createFilesystemsFiles: createFiles: op(5): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem881627084", trying btrfs: device or resource busy
Apr 12 19:01:39.357061 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem881627084"
Apr 12 19:01:39.357061 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem881627084"
Apr 12 19:01:39.357061 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [started] unmounting "/mnt/oem881627084"
Apr 12 19:01:39.492021 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [finished] unmounting "/mnt/oem881627084"
Apr 12 19:01:39.492021 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/hosts"
Apr 12 19:01:39.492021 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Apr 12 19:01:39.492021 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Apr 12 19:01:39.358756 systemd[1]: mnt-oem881627084.mount: Deactivated successfully.
Apr 12 19:01:39.574142 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Apr 12 19:01:39.693206 ignition[831]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Apr 12 19:01:39.717985 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Apr 12 19:01:39.717985 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubectl"
Apr 12 19:01:39.717985 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1
Apr 12 19:01:39.769976 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 12 19:01:40.069509 ignition[831]: DEBUG : files: createFilesystemsFiles: createFiles: op(a): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3
Apr 12 19:01:40.093038 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubectl"
Apr 12 19:01:40.093038 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Apr 12 19:01:40.093038 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 19:01:40.093038 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3295143004"
Apr 12 19:01:40.093038 ignition[831]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3295143004": device or resource busy
Apr 12 19:01:40.093038 ignition[831]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3295143004", trying btrfs: device or resource busy
Apr 12 19:01:40.093038 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3295143004"
Apr 12 19:01:40.093038 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3295143004"
Apr 12 19:01:40.093038 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3295143004"
Apr 12 19:01:40.093038 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3295143004"
Apr 12 19:01:40.093038 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Apr 12 19:01:40.093038 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 19:01:40.093038 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Apr 12 19:01:40.085602 systemd[1]: mnt-oem3295143004.mount: Deactivated successfully.
Apr 12 19:01:40.325118 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
Apr 12 19:01:40.359600 ignition[831]: DEBUG : files: createFilesystemsFiles: createFiles: op(f): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Apr 12 19:01:40.384009 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 19:01:40.384009 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/opt/bin/kubelet"
Apr 12 19:01:40.384009 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Apr 12 19:01:40.384009 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET result: OK
Apr 12 19:01:40.878857 ignition[831]: DEBUG : files: createFilesystemsFiles: createFiles: op(10): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Apr 12 19:01:40.904003 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/opt/bin/kubelet"
Apr 12 19:01:40.904003 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 19:01:40.904003 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 19:01:40.904003 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 19:01:40.904003 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(12): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 12 19:01:41.108652 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(12): GET result: OK
Apr 12 19:01:41.204643 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 19:01:41.234930 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/home/core/install.sh"
Apr 12 19:01:41.234930 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/home/core/install.sh"
Apr 12 19:01:41.234930 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 19:01:41.234930 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 19:01:41.234930 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(15): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12 19:01:41.234930 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(15): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12 19:01:41.234930 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(16): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 12 19:01:41.234930 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(16): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 12 19:01:41.234930 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 12 19:01:41.234930 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 12 19:01:41.234930 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Apr 12 19:01:41.234930 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 19:01:41.234930 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3564140542"
Apr 12 19:01:41.234930 ignition[831]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3564140542": device or resource busy
Apr 12 19:01:41.234930 ignition[831]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3564140542", trying btrfs: device or resource busy
Apr 12 19:01:41.234930 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3564140542"
Apr 12 19:01:41.672069 kernel: audit: type=1130 audit(1712948501.248:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.672134 kernel: audit: type=1130 audit(1712948501.332:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.672163 kernel: audit: type=1130 audit(1712948501.397:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.672203 kernel: audit: type=1131 audit(1712948501.397:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.672223 kernel: audit: type=1130 audit(1712948501.538:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.672238 kernel: audit: type=1131 audit(1712948501.538:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.223558 systemd[1]: mnt-oem3564140542.mount: Deactivated successfully.
Apr 12 19:01:41.709064 kernel: audit: type=1130 audit(1712948501.678:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.709253 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3564140542"
Apr 12 19:01:41.709253 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem3564140542"
Apr 12 19:01:41.709253 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem3564140542"
Apr 12 19:01:41.709253 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Apr 12 19:01:41.709253 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(1c): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Apr 12 19:01:41.709253 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(1c): oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 19:01:41.709253 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1595956790"
Apr 12 19:01:41.709253 ignition[831]: CRITICAL : files: createFilesystemsFiles: createFiles: op(1c): op(1d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1595956790": device or resource busy
Apr 12 19:01:41.709253 ignition[831]: ERROR : files: createFilesystemsFiles: createFiles: op(1c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1595956790", trying btrfs: device or resource busy
Apr 12 19:01:41.709253 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1595956790"
Apr 12 19:01:41.709253 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1595956790"
Apr 12 19:01:41.709253 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1f): [started] unmounting "/mnt/oem1595956790"
Apr 12 19:01:41.709253 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1f): [finished] unmounting "/mnt/oem1595956790"
Apr 12 19:01:41.709253 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(1c): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Apr 12 19:01:41.709253 ignition[831]: INFO : files: op(20): [started] processing unit "coreos-metadata-sshkeys@.service"
Apr 12 19:01:41.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.247992 systemd[1]: Finished ignition-files.service.
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(20): [finished] processing unit "coreos-metadata-sshkeys@.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(21): [started] processing unit "oem-gce.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(21): [finished] processing unit "oem-gce.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(22): [started] processing unit "oem-gce-enable-oslogin.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(22): [finished] processing unit "oem-gce-enable-oslogin.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(23): [started] processing unit "prepare-cni-plugins.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(23): op(24): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(23): op(24): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(23): [finished] processing unit "prepare-cni-plugins.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(25): [started] processing unit "prepare-critools.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(25): op(26): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(25): op(26): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(25): [finished] processing unit "prepare-critools.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(27): [started] processing unit "prepare-helm.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(27): op(28): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(27): op(28): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(27): [finished] processing unit "prepare-helm.service"
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(29): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(29): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Apr 12 19:01:42.046281 ignition[831]: INFO : files: op(2a): [started] setting preset to enabled for "oem-gce.service"
Apr 12 19:01:42.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.260409 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Apr 12 19:01:42.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.455135 initrd-setup-root-after-ignition[854]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 12 19:01:42.477295 ignition[831]: INFO : files: op(2a): [finished] setting preset to enabled for "oem-gce.service"
Apr 12 19:01:42.477295 ignition[831]: INFO : files: op(2b): [started] setting preset to enabled for "oem-gce-enable-oslogin.service"
Apr 12 19:01:42.477295 ignition[831]: INFO : files: op(2b): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service"
Apr 12 19:01:42.477295 ignition[831]: INFO : files: op(2c): [started] setting preset to enabled for "prepare-cni-plugins.service"
Apr 12 19:01:42.477295 ignition[831]: INFO : files: op(2c): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Apr 12 19:01:42.477295 ignition[831]: INFO : files: op(2d): [started] setting preset to enabled for "prepare-critools.service"
Apr 12 19:01:42.477295 ignition[831]: INFO : files: op(2d): [finished] setting preset to enabled for "prepare-critools.service"
Apr 12 19:01:42.477295 ignition[831]: INFO : files: op(2e): [started] setting preset to enabled for "prepare-helm.service"
Apr 12 19:01:42.477295 ignition[831]: INFO : files: op(2e): [finished] setting preset to enabled for "prepare-helm.service"
Apr 12 19:01:42.477295 ignition[831]: INFO : files: createResultFile: createFiles: op(2f): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 12 19:01:42.477295 ignition[831]: INFO : files: createResultFile: createFiles: op(2f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 12 19:01:42.477295 ignition[831]: INFO : files: files passed
Apr 12 19:01:42.477295 ignition[831]: INFO : Ignition finished successfully
Apr 12 19:01:42.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.291155 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Apr 12 19:01:42.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.292261 systemd[1]: Starting ignition-quench.service...
Apr 12 19:01:42.763000 audit: BPF prog-id=6 op=UNLOAD
Apr 12 19:01:41.313497 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Apr 12 19:01:41.334561 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 12 19:01:42.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.334730 systemd[1]: Finished ignition-quench.service.
Apr 12 19:01:42.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.399242 systemd[1]: Reached target ignition-complete.target.
Apr 12 19:01:42.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.492271 systemd[1]: Starting initrd-parse-etc.service...
Apr 12 19:01:42.858151 ignition[869]: INFO : Ignition 2.14.0
Apr 12 19:01:42.858151 ignition[869]: INFO : Stage: umount
Apr 12 19:01:42.858151 ignition[869]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Apr 12 19:01:42.858151 ignition[869]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Apr 12 19:01:42.858151 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 12 19:01:42.858151 ignition[869]: INFO : umount: umount passed
Apr 12 19:01:42.858151 ignition[869]: INFO : Ignition finished successfully
Apr 12 19:01:42.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:42.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.528882 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 12 19:01:42.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.529035 systemd[1]: Finished initrd-parse-etc.service.
Apr 12 19:01:41.540149 systemd[1]: Reached target initrd-fs.target.
Apr 12 19:01:41.597212 systemd[1]: Reached target initrd.target.
Apr 12 19:01:43.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.622299 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Apr 12 19:01:43.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.623637 systemd[1]: Starting dracut-pre-pivot.service...
Apr 12 19:01:43.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:43.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:41.649446 systemd[1]: Finished dracut-pre-pivot.service.
Apr 12 19:01:41.682294 systemd[1]: Starting initrd-cleanup.service...
Apr 12 19:01:41.724607 systemd[1]: Stopped target nss-lookup.target.
Apr 12 19:01:41.741294 systemd[1]: Stopped target remote-cryptsetup.target.
Apr 12 19:01:41.764375 systemd[1]: Stopped target timers.target.
Apr 12 19:01:41.786314 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 12 19:01:41.786561 systemd[1]: Stopped dracut-pre-pivot.service.
Apr 12 19:01:43.128005 systemd-journald[189]: Received SIGTERM from PID 1 (n/a).
Apr 12 19:01:43.128086 iscsid[696]: iscsid shutting down.
Apr 12 19:01:41.827639 systemd[1]: Stopped target initrd.target.
Apr 12 19:01:41.866362 systemd[1]: Stopped target basic.target.
Apr 12 19:01:41.893427 systemd[1]: Stopped target ignition-complete.target.
Apr 12 19:01:41.938366 systemd[1]: Stopped target ignition-diskful.target.
Apr 12 19:01:41.955399 systemd[1]: Stopped target initrd-root-device.target.
Apr 12 19:01:41.976488 systemd[1]: Stopped target remote-fs.target.
Apr 12 19:01:41.997428 systemd[1]: Stopped target remote-fs-pre.target.
Apr 12 19:01:42.039367 systemd[1]: Stopped target sysinit.target.
Apr 12 19:01:42.054407 systemd[1]: Stopped target local-fs.target.
Apr 12 19:01:42.067415 systemd[1]: Stopped target local-fs-pre.target.
Apr 12 19:01:42.084458 systemd[1]: Stopped target swap.target.
Apr 12 19:01:42.101354 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 12 19:01:42.101640 systemd[1]: Stopped dracut-pre-mount.service.
Apr 12 19:01:42.119594 systemd[1]: Stopped target cryptsetup.target.
Apr 12 19:01:42.137370 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 12 19:01:42.137599 systemd[1]: Stopped dracut-initqueue.service.
Apr 12 19:01:42.155638 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 12 19:01:42.155876 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Apr 12 19:01:42.205503 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 12 19:01:42.205708 systemd[1]: Stopped ignition-files.service.
Apr 12 19:01:42.220159 systemd[1]: Stopping ignition-mount.service...
Apr 12 19:01:42.236186 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 12 19:01:42.236456 systemd[1]: Stopped kmod-static-nodes.service.
Apr 12 19:01:42.260072 systemd[1]: Stopping sysroot-boot.service...
Apr 12 19:01:42.282221 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 12 19:01:42.282569 systemd[1]: Stopped systemd-udev-trigger.service.
Apr 12 19:01:42.307418 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 12 19:01:42.307646 systemd[1]: Stopped dracut-pre-trigger.service.
Apr 12 19:01:42.329997 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 12 19:01:42.331037 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 12 19:01:42.331162 systemd[1]: Stopped ignition-mount.service.
Apr 12 19:01:42.351756 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 12 19:01:42.351921 systemd[1]: Stopped sysroot-boot.service.
Apr 12 19:01:42.385871 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 12 19:01:42.386072 systemd[1]: Stopped ignition-disks.service.
Apr 12 19:01:42.399391 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 12 19:01:42.399472 systemd[1]: Stopped ignition-kargs.service.
Apr 12 19:01:42.419353 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 12 19:01:42.419444 systemd[1]: Stopped ignition-fetch.service.
Apr 12 19:01:42.445279 systemd[1]: Stopped target network.target.
Apr 12 19:01:42.463132 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 12 19:01:42.463247 systemd[1]: Stopped ignition-fetch-offline.service.
Apr 12 19:01:42.486176 systemd[1]: Stopped target paths.target.
Apr 12 19:01:42.506046 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 12 19:01:42.509974 systemd[1]: Stopped systemd-ask-password-console.path.
Apr 12 19:01:42.528063 systemd[1]: Stopped target slices.target.
Apr 12 19:01:42.548034 systemd[1]: Stopped target sockets.target.
Apr 12 19:01:42.568134 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 12 19:01:42.568206 systemd[1]: Closed iscsid.socket.
Apr 12 19:01:42.588141 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 12 19:01:42.588209 systemd[1]: Closed iscsiuio.socket.
Apr 12 19:01:42.608100 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 12 19:01:42.608211 systemd[1]: Stopped ignition-setup.service.
Apr 12 19:01:42.629186 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 12 19:01:42.629277 systemd[1]: Stopped initrd-setup-root.service.
Apr 12 19:01:42.650445 systemd[1]: Stopping systemd-networkd.service...
Apr 12 19:01:42.653922 systemd-networkd[687]: eth0: DHCPv6 lease lost
Apr 12 19:01:43.137000 audit: BPF prog-id=9 op=UNLOAD
Apr 12 19:01:42.664349 systemd[1]: Stopping systemd-resolved.service...
Apr 12 19:01:42.692626 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 12 19:01:42.692779 systemd[1]: Stopped systemd-resolved.service.
Apr 12 19:01:42.716885 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 12 19:01:42.717030 systemd[1]: Stopped systemd-networkd.service.
Apr 12 19:01:42.731842 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 12 19:01:42.731990 systemd[1]: Finished initrd-cleanup.service.
Apr 12 19:01:42.749594 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 12 19:01:42.749657 systemd[1]: Closed systemd-networkd.socket.
Apr 12 19:01:42.774365 systemd[1]: Stopping network-cleanup.service...
Apr 12 19:01:42.787008 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 12 19:01:42.787164 systemd[1]: Stopped parse-ip-for-networkd.service.
Apr 12 19:01:42.804205 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 12 19:01:42.804296 systemd[1]: Stopped systemd-sysctl.service.
Apr 12 19:01:42.820282 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 12 19:01:42.820368 systemd[1]: Stopped systemd-modules-load.service.
Apr 12 19:01:42.835342 systemd[1]: Stopping systemd-udevd.service...
Apr 12 19:01:42.851881 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 12 19:01:42.852747 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 12 19:01:42.853051 systemd[1]: Stopped systemd-udevd.service.
Apr 12 19:01:42.868246 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 12 19:01:42.868326 systemd[1]: Closed systemd-udevd-control.socket.
Apr 12 19:01:42.881049 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 12 19:01:42.881115 systemd[1]: Closed systemd-udevd-kernel.socket.
Apr 12 19:01:42.888191 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 12 19:01:42.888273 systemd[1]: Stopped dracut-pre-udev.service.
Apr 12 19:01:42.930371 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 12 19:01:42.930466 systemd[1]: Stopped dracut-cmdline.service.
Apr 12 19:01:42.955237 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 12 19:01:42.955325 systemd[1]: Stopped dracut-cmdline-ask.service.
Apr 12 19:01:42.972740 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Apr 12 19:01:42.995079 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 12 19:01:42.995200 systemd[1]: Stopped systemd-vconsole-setup.service.
Apr 12 19:01:43.010978 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 12 19:01:43.011134 systemd[1]: Stopped network-cleanup.service.
Apr 12 19:01:43.025780 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 12 19:01:43.025943 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Apr 12 19:01:43.044512 systemd[1]: Reached target initrd-switch-root.target.
Apr 12 19:01:43.061603 systemd[1]: Starting initrd-switch-root.service...
Apr 12 19:01:43.089900 systemd[1]: Switching root.
Apr 12 19:01:43.140961 systemd-journald[189]: Journal stopped Apr 12 19:01:47.966240 kernel: SELinux: Class mctp_socket not defined in policy. Apr 12 19:01:47.966381 kernel: SELinux: Class anon_inode not defined in policy. Apr 12 19:01:47.966413 kernel: SELinux: the above unknown classes and permissions will be allowed Apr 12 19:01:47.966446 kernel: SELinux: policy capability network_peer_controls=1 Apr 12 19:01:47.966486 kernel: SELinux: policy capability open_perms=1 Apr 12 19:01:47.966520 kernel: SELinux: policy capability extended_socket_class=1 Apr 12 19:01:47.966550 kernel: SELinux: policy capability always_check_network=0 Apr 12 19:01:47.966573 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 12 19:01:47.966596 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 12 19:01:47.966618 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 12 19:01:47.966645 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 12 19:01:47.966669 kernel: kauditd_printk_skb: 33 callbacks suppressed Apr 12 19:01:47.966699 kernel: audit: type=1403 audit(1712948503.457:77): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 12 19:01:47.966732 systemd[1]: Successfully loaded SELinux policy in 122.494ms. Apr 12 19:01:47.966774 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.553ms. Apr 12 19:01:47.966822 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Apr 12 19:01:47.966849 systemd[1]: Detected virtualization kvm. Apr 12 19:01:47.966889 systemd[1]: Detected architecture x86-64. Apr 12 19:01:47.966913 systemd[1]: Detected first boot. Apr 12 19:01:47.966937 systemd[1]: Initializing machine ID from VM UUID. 
Apr 12 19:01:47.966960 kernel: audit: type=1400 audit(1712948503.624:78): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 19:01:47.966983 kernel: audit: type=1400 audit(1712948503.625:79): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 19:01:47.967005 kernel: audit: type=1334 audit(1712948503.645:80): prog-id=10 op=LOAD Apr 12 19:01:47.967027 kernel: audit: type=1334 audit(1712948503.645:81): prog-id=10 op=UNLOAD Apr 12 19:01:47.967048 kernel: audit: type=1334 audit(1712948503.666:82): prog-id=11 op=LOAD Apr 12 19:01:47.967075 kernel: audit: type=1334 audit(1712948503.666:83): prog-id=11 op=UNLOAD Apr 12 19:01:47.967097 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Apr 12 19:01:47.967121 kernel: audit: type=1400 audit(1712948503.841:84): avc: denied { associate } for pid=902 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Apr 12 19:01:47.967147 kernel: audit: type=1300 audit(1712948503.841:84): arch=c000003e syscall=188 success=yes exit=0 a0=c0001878e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 19:01:47.967171 kernel: audit: type=1327 audit(1712948503.841:84): 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 19:01:47.967196 systemd[1]: Populated /etc with preset unit settings. Apr 12 19:01:47.967220 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 19:01:47.967249 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 19:01:47.967275 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 19:01:47.967298 systemd[1]: iscsiuio.service: Deactivated successfully. Apr 12 19:01:47.967322 systemd[1]: Stopped iscsiuio.service. Apr 12 19:01:47.967345 systemd[1]: iscsid.service: Deactivated successfully. Apr 12 19:01:47.967370 systemd[1]: Stopped iscsid.service. Apr 12 19:01:47.967395 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 12 19:01:47.967419 systemd[1]: Stopped initrd-switch-root.service. Apr 12 19:01:47.967448 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 12 19:01:47.967473 systemd[1]: Created slice system-addon\x2dconfig.slice. Apr 12 19:01:47.967503 systemd[1]: Created slice system-addon\x2drun.slice. Apr 12 19:01:47.967527 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Apr 12 19:01:47.967550 systemd[1]: Created slice system-getty.slice. Apr 12 19:01:47.967574 systemd[1]: Created slice system-modprobe.slice. Apr 12 19:01:47.967599 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Apr 12 19:01:47.967622 systemd[1]: Created slice system-system\x2dcloudinit.slice. Apr 12 19:01:47.967651 systemd[1]: Created slice system-systemd\x2dfsck.slice. Apr 12 19:01:47.967676 systemd[1]: Created slice user.slice. Apr 12 19:01:47.967700 systemd[1]: Started systemd-ask-password-console.path. Apr 12 19:01:47.967725 systemd[1]: Started systemd-ask-password-wall.path. Apr 12 19:01:47.967748 systemd[1]: Set up automount boot.automount. Apr 12 19:01:47.967773 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Apr 12 19:01:47.967825 systemd[1]: Stopped target initrd-switch-root.target. Apr 12 19:01:47.967854 systemd[1]: Stopped target initrd-fs.target. Apr 12 19:01:47.967887 systemd[1]: Stopped target initrd-root-fs.target. Apr 12 19:01:47.967916 systemd[1]: Reached target integritysetup.target. Apr 12 19:01:47.967941 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 19:01:47.967968 systemd[1]: Reached target remote-fs.target. Apr 12 19:01:47.967991 systemd[1]: Reached target slices.target. Apr 12 19:01:47.968016 systemd[1]: Reached target swap.target. Apr 12 19:01:47.968046 systemd[1]: Reached target torcx.target. Apr 12 19:01:47.968084 systemd[1]: Reached target veritysetup.target. Apr 12 19:01:47.968113 systemd[1]: Listening on systemd-coredump.socket. Apr 12 19:01:47.968137 systemd[1]: Listening on systemd-initctl.socket. Apr 12 19:01:47.968166 systemd[1]: Listening on systemd-networkd.socket. Apr 12 19:01:47.968190 systemd[1]: Listening on systemd-udevd-control.socket. Apr 12 19:01:47.968215 systemd[1]: Listening on systemd-udevd-kernel.socket. Apr 12 19:01:47.968239 systemd[1]: Listening on systemd-userdbd.socket. Apr 12 19:01:47.968270 systemd[1]: Mounting dev-hugepages.mount... Apr 12 19:01:47.968293 systemd[1]: Mounting dev-mqueue.mount... Apr 12 19:01:47.968317 systemd[1]: Mounting media.mount... Apr 12 19:01:47.968341 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 12 19:01:47.968366 systemd[1]: Mounting sys-kernel-debug.mount... Apr 12 19:01:47.968394 systemd[1]: Mounting sys-kernel-tracing.mount... Apr 12 19:01:47.968418 systemd[1]: Mounting tmp.mount... Apr 12 19:01:47.968443 systemd[1]: Starting flatcar-tmpfiles.service... Apr 12 19:01:47.968467 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Apr 12 19:01:47.968492 systemd[1]: Starting kmod-static-nodes.service... Apr 12 19:01:47.968518 systemd[1]: Starting modprobe@configfs.service... Apr 12 19:01:47.968542 systemd[1]: Starting modprobe@dm_mod.service... Apr 12 19:01:47.968567 systemd[1]: Starting modprobe@drm.service... Apr 12 19:01:47.968591 systemd[1]: Starting modprobe@efi_pstore.service... Apr 12 19:01:47.968619 systemd[1]: Starting modprobe@fuse.service... Apr 12 19:01:47.968643 systemd[1]: Starting modprobe@loop.service... Apr 12 19:01:47.968665 kernel: fuse: init (API version 7.34) Apr 12 19:01:47.968691 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 12 19:01:47.968715 kernel: loop: module loaded Apr 12 19:01:47.968739 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 12 19:01:47.968762 systemd[1]: Stopped systemd-fsck-root.service. Apr 12 19:01:47.968785 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 12 19:01:47.968831 systemd[1]: Stopped systemd-fsck-usr.service. Apr 12 19:01:47.968859 systemd[1]: Stopped systemd-journald.service. Apr 12 19:01:47.968891 systemd[1]: Starting systemd-journald.service... Apr 12 19:01:47.968930 systemd[1]: Starting systemd-modules-load.service... Apr 12 19:01:47.968954 systemd[1]: Starting systemd-network-generator.service... Apr 12 19:01:47.968985 systemd-journald[993]: Journal started Apr 12 19:01:47.969091 systemd-journald[993]: Runtime Journal (/run/log/journal/2f7c258c52625efc476cf83262f12b12) is 8.0M, max 148.8M, 140.8M free. 
Apr 12 19:01:43.457000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 12 19:01:43.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 19:01:43.625000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 19:01:43.645000 audit: BPF prog-id=10 op=LOAD Apr 12 19:01:43.645000 audit: BPF prog-id=10 op=UNLOAD Apr 12 19:01:43.666000 audit: BPF prog-id=11 op=LOAD Apr 12 19:01:43.666000 audit: BPF prog-id=11 op=UNLOAD Apr 12 19:01:43.841000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Apr 12 19:01:43.841000 audit[902]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 19:01:43.841000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 19:01:43.852000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Apr 12 19:01:43.852000 audit[902]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879b9 a2=1ed a3=0 items=2 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 19:01:43.852000 audit: CWD cwd="/" Apr 12 19:01:43.852000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 19:01:43.852000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 19:01:43.852000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 19:01:47.095000 audit: BPF prog-id=12 op=LOAD Apr 12 19:01:47.095000 audit: BPF prog-id=3 op=UNLOAD Apr 12 19:01:47.095000 audit: BPF prog-id=13 op=LOAD Apr 12 19:01:47.095000 audit: BPF prog-id=14 op=LOAD Apr 12 19:01:47.095000 audit: BPF prog-id=4 op=UNLOAD Apr 12 19:01:47.095000 audit: BPF prog-id=5 op=UNLOAD Apr 12 19:01:47.097000 audit: BPF prog-id=15 op=LOAD Apr 12 19:01:47.097000 audit: BPF prog-id=12 op=UNLOAD Apr 12 19:01:47.097000 audit: BPF prog-id=16 op=LOAD Apr 12 19:01:47.097000 audit: BPF prog-id=17 op=LOAD Apr 12 19:01:47.097000 audit: BPF prog-id=13 op=UNLOAD Apr 12 19:01:47.097000 audit: BPF prog-id=14 op=UNLOAD Apr 12 19:01:47.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 19:01:47.107000 audit: BPF prog-id=15 op=UNLOAD Apr 12 19:01:47.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:47.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:47.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:47.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:47.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:47.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:47.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:47.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 19:01:47.920000 audit: BPF prog-id=18 op=LOAD Apr 12 19:01:47.920000 audit: BPF prog-id=19 op=LOAD Apr 12 19:01:47.920000 audit: BPF prog-id=20 op=LOAD Apr 12 19:01:47.920000 audit: BPF prog-id=16 op=UNLOAD Apr 12 19:01:47.920000 audit: BPF prog-id=17 op=UNLOAD Apr 12 19:01:47.962000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Apr 12 19:01:47.962000 audit[993]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd71c16f30 a2=4000 a3=7ffd71c16fcc items=0 ppid=1 pid=993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 19:01:47.962000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Apr 12 19:01:43.837558 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 19:01:47.094697 systemd[1]: Queued start job for default target multi-user.target. Apr 12 19:01:43.838653 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Apr 12 19:01:47.100647 systemd[1]: systemd-journald.service: Deactivated successfully. 
Apr 12 19:01:43.838677 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Apr 12 19:01:43.838720 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Apr 12 19:01:43.838733 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=debug msg="skipped missing lower profile" missing profile=oem Apr 12 19:01:43.838773 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Apr 12 19:01:43.838789 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Apr 12 19:01:43.839064 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Apr 12 19:01:43.839111 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Apr 12 19:01:43.839126 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Apr 12 19:01:43.842005 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Apr 12 19:01:43.842085 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Apr 12 19:01:43.842120 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.3: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.3 Apr 12 19:01:43.842150 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Apr 12 19:01:43.842182 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.3: no such file or directory" path=/var/lib/torcx/store/3510.3.3 Apr 12 19:01:43.842209 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:43Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Apr 12 19:01:46.467294 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:46Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 19:01:46.467617 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:46Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 19:01:46.467764 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:46Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 
19:01:46.468049 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:46Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 19:01:46.468111 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:46Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Apr 12 19:01:46.468184 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-04-12T19:01:46Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Apr 12 19:01:47.979921 systemd[1]: Starting systemd-remount-fs.service... Apr 12 19:01:47.994847 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 19:01:48.015310 systemd[1]: verity-setup.service: Deactivated successfully. Apr 12 19:01:48.015415 systemd[1]: Stopped verity-setup.service. Apr 12 19:01:48.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.034923 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 12 19:01:48.043855 systemd[1]: Started systemd-journald.service. Apr 12 19:01:48.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 19:01:48.053239 systemd[1]: Mounted dev-hugepages.mount. Apr 12 19:01:48.060151 systemd[1]: Mounted dev-mqueue.mount. Apr 12 19:01:48.067155 systemd[1]: Mounted media.mount. Apr 12 19:01:48.074124 systemd[1]: Mounted sys-kernel-debug.mount. Apr 12 19:01:48.083149 systemd[1]: Mounted sys-kernel-tracing.mount. Apr 12 19:01:48.092108 systemd[1]: Mounted tmp.mount. Apr 12 19:01:48.100226 systemd[1]: Finished flatcar-tmpfiles.service. Apr 12 19:01:48.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.110321 systemd[1]: Finished kmod-static-nodes.service. Apr 12 19:01:48.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.119266 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 12 19:01:48.119497 systemd[1]: Finished modprobe@configfs.service. Apr 12 19:01:48.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.128335 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 12 19:01:48.128563 systemd[1]: Finished modprobe@dm_mod.service. 
Apr 12 19:01:48.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.137431 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 12 19:01:48.137649 systemd[1]: Finished modprobe@drm.service. Apr 12 19:01:48.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.146394 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 12 19:01:48.146603 systemd[1]: Finished modprobe@efi_pstore.service. Apr 12 19:01:48.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.155398 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 12 19:01:48.155606 systemd[1]: Finished modprobe@fuse.service. 
Apr 12 19:01:48.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.164296 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 12 19:01:48.164495 systemd[1]: Finished modprobe@loop.service. Apr 12 19:01:48.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.173316 systemd[1]: Finished systemd-modules-load.service. Apr 12 19:01:48.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.182246 systemd[1]: Finished systemd-network-generator.service. Apr 12 19:01:48.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:48.191347 systemd[1]: Finished systemd-remount-fs.service. 
Apr 12 19:01:48.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:48.200346 systemd[1]: Finished systemd-udev-trigger.service.
Apr 12 19:01:48.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:48.209772 systemd[1]: Reached target network-pre.target.
Apr 12 19:01:48.219466 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Apr 12 19:01:48.229344 systemd[1]: Mounting sys-kernel-config.mount...
Apr 12 19:01:48.236940 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 12 19:01:48.239887 systemd[1]: Starting systemd-hwdb-update.service...
Apr 12 19:01:48.248608 systemd[1]: Starting systemd-journal-flush.service...
Apr 12 19:01:48.254856 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 12 19:01:48.256553 systemd[1]: Starting systemd-random-seed.service...
Apr 12 19:01:48.263975 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Apr 12 19:01:48.265676 systemd[1]: Starting systemd-sysctl.service...
Apr 12 19:01:48.270366 systemd-journald[993]: Time spent on flushing to /var/log/journal/2f7c258c52625efc476cf83262f12b12 is 49.341ms for 1188 entries.
Apr 12 19:01:48.270366 systemd-journald[993]: System Journal (/var/log/journal/2f7c258c52625efc476cf83262f12b12) is 8.0M, max 584.8M, 576.8M free.
Apr 12 19:01:48.368133 systemd-journald[993]: Received client request to flush runtime journal.
Apr 12 19:01:48.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:48.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:48.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:48.280643 systemd[1]: Starting systemd-sysusers.service...
Apr 12 19:01:48.289611 systemd[1]: Starting systemd-udev-settle.service...
Apr 12 19:01:48.370741 udevadm[1007]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 12 19:01:48.300312 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Apr 12 19:01:48.310075 systemd[1]: Mounted sys-kernel-config.mount.
Apr 12 19:01:48.319256 systemd[1]: Finished systemd-random-seed.service.
Apr 12 19:01:48.328379 systemd[1]: Finished systemd-sysctl.service.
Apr 12 19:01:48.340555 systemd[1]: Reached target first-boot-complete.target.
Apr 12 19:01:48.352985 systemd[1]: Finished systemd-sysusers.service.
Apr 12 19:01:48.370305 systemd[1]: Finished systemd-journal-flush.service.
Apr 12 19:01:48.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:48.964401 systemd[1]: Finished systemd-hwdb-update.service.
Apr 12 19:01:49.000785 kernel: kauditd_printk_skb: 60 callbacks suppressed
Apr 12 19:01:49.000987 kernel: audit: type=1130 audit(1712948508.971:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:48.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:48.998000 audit: BPF prog-id=21 op=LOAD
Apr 12 19:01:49.002358 systemd[1]: Starting systemd-udevd.service...
Apr 12 19:01:48.999000 audit: BPF prog-id=22 op=LOAD
Apr 12 19:01:48.999000 audit: BPF prog-id=7 op=UNLOAD
Apr 12 19:01:48.999000 audit: BPF prog-id=8 op=UNLOAD
Apr 12 19:01:49.015285 kernel: audit: type=1334 audit(1712948508.998:139): prog-id=21 op=LOAD
Apr 12 19:01:49.015379 kernel: audit: type=1334 audit(1712948508.999:140): prog-id=22 op=LOAD
Apr 12 19:01:49.015411 kernel: audit: type=1334 audit(1712948508.999:141): prog-id=7 op=UNLOAD
Apr 12 19:01:49.015442 kernel: audit: type=1334 audit(1712948508.999:142): prog-id=8 op=UNLOAD
Apr 12 19:01:49.052700 systemd-udevd[1011]: Using default interface naming scheme 'v252'.
Apr 12 19:01:49.100473 systemd[1]: Started systemd-udevd.service.
Apr 12 19:01:49.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:49.131885 kernel: audit: type=1130 audit(1712948509.107:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:49.139089 systemd[1]: Starting systemd-networkd.service...
Apr 12 19:01:49.135000 audit: BPF prog-id=23 op=LOAD
Apr 12 19:01:49.149864 kernel: audit: type=1334 audit(1712948509.135:144): prog-id=23 op=LOAD
Apr 12 19:01:49.162000 audit: BPF prog-id=24 op=LOAD
Apr 12 19:01:49.172833 kernel: audit: type=1334 audit(1712948509.162:145): prog-id=24 op=LOAD
Apr 12 19:01:49.180878 kernel: audit: type=1334 audit(1712948509.170:146): prog-id=25 op=LOAD
Apr 12 19:01:49.170000 audit: BPF prog-id=25 op=LOAD
Apr 12 19:01:49.173645 systemd[1]: Starting systemd-userdbd.service...
Apr 12 19:01:49.170000 audit: BPF prog-id=26 op=LOAD
Apr 12 19:01:49.191865 kernel: audit: type=1334 audit(1712948509.170:147): prog-id=26 op=LOAD
Apr 12 19:01:49.233298 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Apr 12 19:01:49.263121 systemd[1]: Started systemd-userdbd.service.
Apr 12 19:01:49.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:49.341839 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 12 19:01:49.408545 systemd-networkd[1023]: lo: Link UP
Apr 12 19:01:49.408565 systemd-networkd[1023]: lo: Gained carrier
Apr 12 19:01:49.409470 systemd-networkd[1023]: Enumeration completed
Apr 12 19:01:49.409650 systemd[1]: Started systemd-networkd.service.
Apr 12 19:01:49.410085 systemd-networkd[1023]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 12 19:01:49.412824 systemd-networkd[1023]: eth0: Link UP
Apr 12 19:01:49.412839 systemd-networkd[1023]: eth0: Gained carrier
Apr 12 19:01:49.420826 kernel: ACPI: button: Power Button [PWRF]
Apr 12 19:01:49.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:49.429706 systemd-networkd[1023]: eth0: DHCPv4 address 10.128.0.35/32, gateway 10.128.0.1 acquired from 169.254.169.254
Apr 12 19:01:49.468838 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1041)
Apr 12 19:01:49.420000 audit[1013]: AVC avc: denied { confidentiality } for pid=1013 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Apr 12 19:01:49.420000 audit[1013]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c6435568e0 a1=32194 a2=7efc9fe60bc5 a3=5 items=108 ppid=1011 pid=1013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 19:01:49.420000 audit: CWD cwd="/"
Apr 12 19:01:49.420000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=1 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=2 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=3 name=(null) inode=14437 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=4 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=5 name=(null) inode=14438 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=6 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=7 name=(null) inode=14439 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=8 name=(null) inode=14439 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=9 name=(null) inode=14440 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=10 name=(null) inode=14439 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=11 name=(null) inode=14441 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=12 name=(null) inode=14439 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=13 name=(null) inode=14442 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=14 name=(null) inode=14439 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=15 name=(null) inode=14443 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=16 name=(null) inode=14439 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=17 name=(null) inode=14444 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=18 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=19 name=(null) inode=14445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=20 name=(null) inode=14445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=21 name=(null) inode=14446 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=22 name=(null) inode=14445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=23 name=(null) inode=14447 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=24 name=(null) inode=14445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=25 name=(null) inode=14448 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=26 name=(null) inode=14445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=27 name=(null) inode=14449 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=28 name=(null) inode=14445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=29 name=(null) inode=14450 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=30 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=31 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=32 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=33 name=(null) inode=14452 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=34 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=35 name=(null) inode=14453 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=36 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=37 name=(null) inode=14454 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=38 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=39 name=(null) inode=14455 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=40 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=41 name=(null) inode=14456 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=42 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=43 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=44 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=45 name=(null) inode=14458 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=46 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=47 name=(null) inode=14459 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=48 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=49 name=(null) inode=14460 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=50 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=51 name=(null) inode=14461 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=52 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=53 name=(null) inode=14462 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=55 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=56 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=57 name=(null) inode=14464 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=58 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=59 name=(null) inode=14465 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=60 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=61 name=(null) inode=14466 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=62 name=(null) inode=14466 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=63 name=(null) inode=14467 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=64 name=(null) inode=14466 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=65 name=(null) inode=14468 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=66 name=(null) inode=14466 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=67 name=(null) inode=14469 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=68 name=(null) inode=14466 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=69 name=(null) inode=14470 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=70 name=(null) inode=14466 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=71 name=(null) inode=14471 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=72 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=73 name=(null) inode=14472 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=74 name=(null) inode=14472 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=75 name=(null) inode=14473 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=76 name=(null) inode=14472 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=77 name=(null) inode=14474 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=78 name=(null) inode=14472 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=79 name=(null) inode=14475 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=80 name=(null) inode=14472 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=81 name=(null) inode=14476 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=82 name=(null) inode=14472 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=83 name=(null) inode=14477 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=84 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=85 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=86 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=87 name=(null) inode=14479 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=88 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=89 name=(null) inode=14480 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=90 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=91 name=(null) inode=14481 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=92 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=93 name=(null) inode=14482 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=94 name=(null) inode=14478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=95 name=(null) inode=14483 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=96 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=97 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=98 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=99 name=(null) inode=14485 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=100 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=101 name=(null) inode=14486 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=102 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=103 name=(null) inode=14487 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=104 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=105 name=(null) inode=14488 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=106 name=(null) inode=14484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PATH item=107 name=(null) inode=14489 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 19:01:49.420000 audit: PROCTITLE proctitle="(udev-worker)"
Apr 12 19:01:49.498835 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 12 19:01:49.499224 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Apr 12 19:01:49.515926 kernel: EDAC MC: Ver: 3.0.0
Apr 12 19:01:49.545837 kernel: ACPI: button: Sleep Button [SLPF]
Apr 12 19:01:49.545990 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 12 19:01:49.578917 kernel: mousedev: PS/2 mouse device common for all mice
Apr 12 19:01:49.588351 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Apr 12 19:01:49.603423 systemd[1]: Finished systemd-udev-settle.service.
Apr 12 19:01:49.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:49.614120 systemd[1]: Starting lvm2-activation-early.service...
Apr 12 19:01:49.646457 lvm[1048]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 12 19:01:49.680745 systemd[1]: Finished lvm2-activation-early.service.
Apr 12 19:01:49.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:49.690360 systemd[1]: Reached target cryptsetup.target.
Apr 12 19:01:49.702925 systemd[1]: Starting lvm2-activation.service...
Apr 12 19:01:49.709664 lvm[1049]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 12 19:01:49.738998 systemd[1]: Finished lvm2-activation.service.
Apr 12 19:01:49.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 19:01:49.748331 systemd[1]: Reached target local-fs-pre.target.
Apr 12 19:01:49.757063 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 12 19:01:49.757133 systemd[1]: Reached target local-fs.target.
Apr 12 19:01:49.766078 systemd[1]: Reached target machines.target.
Apr 12 19:01:49.777078 systemd[1]: Starting ldconfig.service...
Apr 12 19:01:49.785217 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Apr 12 19:01:49.785365 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Apr 12 19:01:49.787593 systemd[1]: Starting systemd-boot-update.service...
Apr 12 19:01:49.797638 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Apr 12 19:01:49.810663 systemd[1]: Starting systemd-machine-id-commit.service...
Apr 12 19:01:49.811396 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Apr 12 19:01:49.811554 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Apr 12 19:01:49.814854 systemd[1]: Starting systemd-tmpfiles-setup.service...
Apr 12 19:01:49.816005 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1051 (bootctl) Apr 12 19:01:49.821445 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Apr 12 19:01:49.836871 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Apr 12 19:01:49.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:49.873717 systemd-tmpfiles[1055]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Apr 12 19:01:49.887649 systemd-tmpfiles[1055]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 12 19:01:49.901681 systemd-tmpfiles[1055]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 12 19:01:50.004645 systemd-fsck[1059]: fsck.fat 4.2 (2021-01-31) Apr 12 19:01:50.004645 systemd-fsck[1059]: /dev/sda1: 789 files, 119240/258078 clusters Apr 12 19:01:50.010207 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Apr 12 19:01:50.022370 systemd[1]: Mounting boot.mount... Apr 12 19:01:50.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:50.058920 systemd[1]: Mounted boot.mount. Apr 12 19:01:50.089044 systemd[1]: Finished systemd-boot-update.service. Apr 12 19:01:50.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:50.216743 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Apr 12 19:01:50.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:50.227020 systemd[1]: Starting audit-rules.service... Apr 12 19:01:50.235689 systemd[1]: Starting clean-ca-certificates.service... Apr 12 19:01:50.245764 systemd[1]: Starting oem-gce-enable-oslogin.service... Apr 12 19:01:50.255876 systemd[1]: Starting systemd-journal-catalog-update.service... Apr 12 19:01:50.263000 audit: BPF prog-id=27 op=LOAD Apr 12 19:01:50.266711 systemd[1]: Starting systemd-resolved.service... Apr 12 19:01:50.273000 audit: BPF prog-id=28 op=LOAD Apr 12 19:01:50.276510 systemd[1]: Starting systemd-timesyncd.service... Apr 12 19:01:50.285809 systemd[1]: Starting systemd-update-utmp.service... Apr 12 19:01:50.292000 audit[1086]: SYSTEM_BOOT pid=1086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 12 19:01:50.294550 systemd[1]: Finished clean-ca-certificates.service. Apr 12 19:01:50.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:50.303442 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Apr 12 19:01:50.303703 systemd[1]: Finished oem-gce-enable-oslogin.service. Apr 12 19:01:50.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 19:01:50.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:50.315390 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 12 19:01:50.320053 systemd[1]: Finished systemd-update-utmp.service. Apr 12 19:01:50.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 19:01:50.424581 systemd[1]: Finished systemd-journal-catalog-update.service. Apr 12 19:01:50.425689 augenrules[1094]: No rules Apr 12 19:01:50.423000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 12 19:01:50.423000 audit[1094]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc308410c0 a2=420 a3=0 items=0 ppid=1064 pid=1094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 19:01:50.423000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 12 19:01:50.429387 systemd-timesyncd[1081]: Contacted time server 169.254.169.254:123 (169.254.169.254). Apr 12 19:01:50.429483 systemd-timesyncd[1081]: Initial clock synchronization to Fri 2024-04-12 19:01:50.389880 UTC. Apr 12 19:01:50.433983 systemd-resolved[1078]: Positive Trust Anchors: Apr 12 19:01:50.434003 systemd-resolved[1078]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 19:01:50.434055 systemd-resolved[1078]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 19:01:50.435103 systemd[1]: Started systemd-timesyncd.service. Apr 12 19:01:50.444413 systemd[1]: Finished audit-rules.service. Apr 12 19:01:50.452515 systemd[1]: Reached target time-set.target. Apr 12 19:01:50.468362 systemd-resolved[1078]: Defaulting to hostname 'linux'. Apr 12 19:01:50.470988 systemd[1]: Started systemd-resolved.service. Apr 12 19:01:50.479026 systemd[1]: Reached target network.target. Apr 12 19:01:50.486959 systemd[1]: Reached target nss-lookup.target. Apr 12 19:01:50.664377 ldconfig[1050]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 12 19:01:50.777054 systemd-networkd[1023]: eth0: Gained IPv6LL Apr 12 19:01:50.872821 systemd[1]: Finished ldconfig.service. Apr 12 19:01:50.881937 systemd[1]: Starting systemd-update-done.service... Apr 12 19:01:50.891719 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 12 19:01:50.893270 systemd[1]: Finished systemd-machine-id-commit.service. Apr 12 19:01:50.902443 systemd[1]: Finished systemd-update-done.service. Apr 12 19:01:50.911206 systemd[1]: Reached target sysinit.target. Apr 12 19:01:50.920185 systemd[1]: Started motdgen.path. Apr 12 19:01:50.927091 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Apr 12 19:01:50.937244 systemd[1]: Started logrotate.timer. Apr 12 19:01:50.944146 systemd[1]: Started mdadm.timer. 
Apr 12 19:01:50.950987 systemd[1]: Started systemd-tmpfiles-clean.timer. Apr 12 19:01:50.960018 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 12 19:01:50.960095 systemd[1]: Reached target paths.target. Apr 12 19:01:50.966999 systemd[1]: Reached target timers.target. Apr 12 19:01:50.974583 systemd[1]: Listening on dbus.socket. Apr 12 19:01:50.983733 systemd[1]: Starting docker.socket... Apr 12 19:01:50.995431 systemd[1]: Listening on sshd.socket. Apr 12 19:01:51.003199 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 19:01:51.004143 systemd[1]: Listening on docker.socket. Apr 12 19:01:51.011222 systemd[1]: Reached target sockets.target. Apr 12 19:01:51.020071 systemd[1]: Reached target basic.target. Apr 12 19:01:51.027098 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 19:01:51.027152 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 19:01:51.029138 systemd[1]: Starting containerd.service... Apr 12 19:01:51.037555 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Apr 12 19:01:51.049400 systemd[1]: Starting dbus.service... Apr 12 19:01:51.057776 systemd[1]: Starting enable-oem-cloudinit.service... Apr 12 19:01:51.067428 systemd[1]: Starting extend-filesystems.service... Apr 12 19:01:51.072935 jq[1106]: false Apr 12 19:01:51.075000 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Apr 12 19:01:51.077305 systemd[1]: Starting motdgen.service... Apr 12 19:01:51.087284 systemd[1]: Starting oem-gce.service... Apr 12 19:01:51.098216 systemd[1]: Starting prepare-cni-plugins.service... 
Apr 12 19:01:51.106764 systemd[1]: Starting prepare-critools.service... Apr 12 19:01:51.117006 systemd[1]: Starting prepare-helm.service... Apr 12 19:01:51.127735 systemd[1]: Starting ssh-key-proc-cmdline.service... Apr 12 19:01:51.140471 systemd[1]: Starting sshd-keygen.service... Apr 12 19:01:51.143404 extend-filesystems[1107]: Found sda Apr 12 19:01:51.143404 extend-filesystems[1107]: Found sda1 Apr 12 19:01:51.143404 extend-filesystems[1107]: Found sda2 Apr 12 19:01:51.143404 extend-filesystems[1107]: Found sda3 Apr 12 19:01:51.143404 extend-filesystems[1107]: Found usr Apr 12 19:01:51.143404 extend-filesystems[1107]: Found sda4 Apr 12 19:01:51.143404 extend-filesystems[1107]: Found sda6 Apr 12 19:01:51.143404 extend-filesystems[1107]: Found sda7 Apr 12 19:01:51.143404 extend-filesystems[1107]: Found sda9 Apr 12 19:01:51.143404 extend-filesystems[1107]: Checking size of /dev/sda9 Apr 12 19:01:51.293018 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Apr 12 19:01:51.152590 systemd[1]: Starting systemd-logind.service... Apr 12 19:01:51.293435 extend-filesystems[1107]: Resized partition /dev/sda9 Apr 12 19:01:51.159975 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 19:01:51.302364 extend-filesystems[1148]: resize2fs 1.46.5 (30-Dec-2021) Apr 12 19:01:51.299109 dbus-daemon[1105]: [system] SELinux support is enabled Apr 12 19:01:51.160095 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Apr 12 19:01:51.318762 jq[1131]: true Apr 12 19:01:51.161027 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 12 19:01:51.163172 systemd[1]: Starting update-engine.service... 
Apr 12 19:01:51.319420 tar[1137]: ./ Apr 12 19:01:51.319420 tar[1137]: ./loopback Apr 12 19:01:51.172364 systemd[1]: Starting update-ssh-keys-after-ignition.service... Apr 12 19:01:51.184848 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 12 19:01:51.185199 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Apr 12 19:01:51.185856 systemd[1]: motdgen.service: Deactivated successfully. Apr 12 19:01:51.186130 systemd[1]: Finished motdgen.service. Apr 12 19:01:51.320949 mkfs.ext4[1145]: mke2fs 1.46.5 (30-Dec-2021) Apr 12 19:01:51.320949 mkfs.ext4[1145]: Discarding device blocks: done Apr 12 19:01:51.320949 mkfs.ext4[1145]: Creating filesystem with 262144 4k blocks and 65536 inodes Apr 12 19:01:51.320949 mkfs.ext4[1145]: Filesystem UUID: ee29feac-5619-4c7d-83a3-804af7183d27 Apr 12 19:01:51.320949 mkfs.ext4[1145]: Superblock backups stored on blocks: Apr 12 19:01:51.320949 mkfs.ext4[1145]: 32768, 98304, 163840, 229376 Apr 12 19:01:51.320949 mkfs.ext4[1145]: Allocating group tables: done Apr 12 19:01:51.320949 mkfs.ext4[1145]: Writing inode tables: done Apr 12 19:01:51.320949 mkfs.ext4[1145]: Creating journal (8192 blocks): done Apr 12 19:01:51.320949 mkfs.ext4[1145]: Writing superblocks and filesystem accounting information: done Apr 12 19:01:51.199341 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 12 19:01:51.321601 jq[1143]: true Apr 12 19:01:51.199670 systemd[1]: Finished ssh-key-proc-cmdline.service. Apr 12 19:01:51.299511 systemd[1]: Started dbus.service. 
Apr 12 19:01:51.313615 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 12 19:01:51.313680 systemd[1]: Reached target system-config.target. Apr 12 19:01:51.326021 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 12 19:01:51.326063 systemd[1]: Reached target user-config.target. Apr 12 19:01:51.330619 dbus-daemon[1105]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1023 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 12 19:01:51.341924 dbus-daemon[1105]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 12 19:01:51.342495 tar[1138]: crictl Apr 12 19:01:51.349537 systemd[1]: Starting systemd-hostnamed.service... Apr 12 19:01:51.364594 umount[1153]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Apr 12 19:01:51.372834 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Apr 12 19:01:51.423774 kernel: loop0: detected capacity change from 0 to 2097152 Apr 12 19:01:51.424020 tar[1139]: linux-amd64/helm Apr 12 19:01:51.427837 extend-filesystems[1148]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 12 19:01:51.427837 extend-filesystems[1148]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 12 19:01:51.427837 extend-filesystems[1148]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. 
Apr 12 19:01:51.478027 extend-filesystems[1107]: Resized filesystem in /dev/sda9 Apr 12 19:01:51.478195 update_engine[1129]: I0412 19:01:51.438647 1129 main.cc:92] Flatcar Update Engine starting Apr 12 19:01:51.478195 update_engine[1129]: I0412 19:01:51.446168 1129 update_check_scheduler.cc:74] Next update check in 4m57s Apr 12 19:01:51.429313 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 12 19:01:51.429731 systemd[1]: Finished extend-filesystems.service. Apr 12 19:01:51.448488 systemd[1]: Started update-engine.service. Apr 12 19:01:51.469604 systemd[1]: Started locksmithd.service. Apr 12 19:01:51.518091 bash[1175]: Updated "/home/core/.ssh/authorized_keys" Apr 12 19:01:51.518547 systemd[1]: Finished update-ssh-keys-after-ignition.service. Apr 12 19:01:51.518821 kernel: EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Apr 12 19:01:51.580189 coreos-metadata[1104]: Apr 12 19:01:51.580 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Apr 12 19:01:51.585934 coreos-metadata[1104]: Apr 12 19:01:51.583 INFO Fetch failed with 404: resource not found Apr 12 19:01:51.585934 coreos-metadata[1104]: Apr 12 19:01:51.583 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Apr 12 19:01:51.585934 coreos-metadata[1104]: Apr 12 19:01:51.584 INFO Fetch successful Apr 12 19:01:51.585934 coreos-metadata[1104]: Apr 12 19:01:51.584 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Apr 12 19:01:51.586289 coreos-metadata[1104]: Apr 12 19:01:51.586 INFO Fetch failed with 404: resource not found Apr 12 19:01:51.586289 coreos-metadata[1104]: Apr 12 19:01:51.586 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Apr 12 19:01:51.587671 coreos-metadata[1104]: Apr 12 19:01:51.586 INFO Fetch failed with 404: resource not found Apr 12 
19:01:51.587671 coreos-metadata[1104]: Apr 12 19:01:51.587 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Apr 12 19:01:51.588114 coreos-metadata[1104]: Apr 12 19:01:51.588 INFO Fetch successful Apr 12 19:01:51.590203 unknown[1104]: wrote ssh authorized keys file for user: core Apr 12 19:01:51.615129 systemd-logind[1127]: Watching system buttons on /dev/input/event1 (Power Button) Apr 12 19:01:51.615175 systemd-logind[1127]: Watching system buttons on /dev/input/event2 (Sleep Button) Apr 12 19:01:51.615209 systemd-logind[1127]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 12 19:01:51.617334 env[1144]: time="2024-04-12T19:01:51.617257480Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Apr 12 19:01:51.634106 systemd-logind[1127]: New seat seat0. Apr 12 19:01:51.647304 systemd[1]: Started systemd-logind.service. Apr 12 19:01:51.673684 update-ssh-keys[1183]: Updated "/home/core/.ssh/authorized_keys" Apr 12 19:01:51.674607 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Apr 12 19:01:51.677987 tar[1137]: ./bandwidth Apr 12 19:01:51.772269 env[1144]: time="2024-04-12T19:01:51.772186890Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 12 19:01:51.772486 env[1144]: time="2024-04-12T19:01:51.772438992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 12 19:01:51.824836 env[1144]: time="2024-04-12T19:01:51.824342080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.154-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 12 19:01:51.824836 env[1144]: time="2024-04-12T19:01:51.824408646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 12 19:01:51.825102 env[1144]: time="2024-04-12T19:01:51.824792712Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 19:01:51.825102 env[1144]: time="2024-04-12T19:01:51.824887994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 12 19:01:51.825102 env[1144]: time="2024-04-12T19:01:51.824914134Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Apr 12 19:01:51.825102 env[1144]: time="2024-04-12T19:01:51.824931809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 12 19:01:51.825102 env[1144]: time="2024-04-12T19:01:51.825056618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 12 19:01:51.826832 env[1144]: time="2024-04-12T19:01:51.825444115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 12 19:01:51.826832 env[1144]: time="2024-04-12T19:01:51.825712261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 19:01:51.826832 env[1144]: time="2024-04-12T19:01:51.825744424Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 12 19:01:51.826832 env[1144]: time="2024-04-12T19:01:51.825864750Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Apr 12 19:01:51.826832 env[1144]: time="2024-04-12T19:01:51.825889724Z" level=info msg="metadata content store policy set" policy=shared Apr 12 19:01:51.843057 env[1144]: time="2024-04-12T19:01:51.842920090Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 12 19:01:51.843057 env[1144]: time="2024-04-12T19:01:51.843004699Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 12 19:01:51.843057 env[1144]: time="2024-04-12T19:01:51.843028546Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 12 19:01:51.843360 env[1144]: time="2024-04-12T19:01:51.843103056Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 12 19:01:51.843360 env[1144]: time="2024-04-12T19:01:51.843129261Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 12 19:01:51.843360 env[1144]: time="2024-04-12T19:01:51.843221183Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 12 19:01:51.843360 env[1144]: time="2024-04-12T19:01:51.843244068Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Apr 12 19:01:51.843360 env[1144]: time="2024-04-12T19:01:51.843277653Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 12 19:01:51.843360 env[1144]: time="2024-04-12T19:01:51.843310667Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Apr 12 19:01:51.843360 env[1144]: time="2024-04-12T19:01:51.843341759Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 12 19:01:51.843677 env[1144]: time="2024-04-12T19:01:51.843366153Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 12 19:01:51.843677 env[1144]: time="2024-04-12T19:01:51.843391972Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 12 19:01:51.843677 env[1144]: time="2024-04-12T19:01:51.843624820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 12 19:01:51.843866 env[1144]: time="2024-04-12T19:01:51.843776808Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 12 19:01:51.844433 env[1144]: time="2024-04-12T19:01:51.844377991Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 12 19:01:51.844548 env[1144]: time="2024-04-12T19:01:51.844444479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 12 19:01:51.844548 env[1144]: time="2024-04-12T19:01:51.844469859Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 12 19:01:51.844668 env[1144]: time="2024-04-12T19:01:51.844590234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 12 19:01:51.844723 env[1144]: time="2024-04-12T19:01:51.844700981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 12 19:01:51.844786 env[1144]: time="2024-04-12T19:01:51.844729721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 12 19:01:51.844786 env[1144]: time="2024-04-12T19:01:51.844752225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 12 19:01:51.844917 env[1144]: time="2024-04-12T19:01:51.844824757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 12 19:01:51.844917 env[1144]: time="2024-04-12T19:01:51.844852502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 12 19:01:51.844917 env[1144]: time="2024-04-12T19:01:51.844875735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 12 19:01:51.844917 env[1144]: time="2024-04-12T19:01:51.844899780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 12 19:01:51.845124 env[1144]: time="2024-04-12T19:01:51.844927559Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 12 19:01:51.845187 env[1144]: time="2024-04-12T19:01:51.845130442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 12 19:01:51.845187 env[1144]: time="2024-04-12T19:01:51.845158880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 12 19:01:51.845278 env[1144]: time="2024-04-12T19:01:51.845196222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Apr 12 19:01:51.845278 env[1144]: time="2024-04-12T19:01:51.845223003Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 12 19:01:51.845278 env[1144]: time="2024-04-12T19:01:51.845252956Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Apr 12 19:01:51.845415 env[1144]: time="2024-04-12T19:01:51.845274211Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 12 19:01:51.845415 env[1144]: time="2024-04-12T19:01:51.845307129Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Apr 12 19:01:51.845415 env[1144]: time="2024-04-12T19:01:51.845365085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 12 19:01:51.845924 env[1144]: time="2024-04-12T19:01:51.845834102Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 12 19:01:51.850189 env[1144]: time="2024-04-12T19:01:51.845938085Z" level=info msg="Connect containerd service" Apr 12 19:01:51.850189 env[1144]: time="2024-04-12T19:01:51.846003851Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 12 19:01:51.873777 env[1144]: time="2024-04-12T19:01:51.873708592Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 19:01:51.874332 env[1144]: time="2024-04-12T19:01:51.874288967Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 12 19:01:51.874483 env[1144]: time="2024-04-12T19:01:51.874384176Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 12 19:01:51.874599 systemd[1]: Started containerd.service. Apr 12 19:01:51.874956 env[1144]: time="2024-04-12T19:01:51.874920481Z" level=info msg="containerd successfully booted in 0.258930s" Apr 12 19:01:51.883423 dbus-daemon[1105]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 12 19:01:51.883931 systemd[1]: Started systemd-hostnamed.service. Apr 12 19:01:51.888044 dbus-daemon[1105]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1159 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 12 19:01:51.897283 systemd[1]: Starting polkit.service... Apr 12 19:01:51.899470 env[1144]: time="2024-04-12T19:01:51.899374907Z" level=info msg="Start subscribing containerd event" Apr 12 19:01:51.899602 env[1144]: time="2024-04-12T19:01:51.899506339Z" level=info msg="Start recovering state" Apr 12 19:01:51.899658 env[1144]: time="2024-04-12T19:01:51.899624454Z" level=info msg="Start event monitor" Apr 12 19:01:51.899713 env[1144]: time="2024-04-12T19:01:51.899657234Z" level=info msg="Start snapshots syncer" Apr 12 19:01:51.899713 env[1144]: time="2024-04-12T19:01:51.899677305Z" level=info msg="Start cni network conf syncer for default" Apr 12 19:01:51.899713 env[1144]: time="2024-04-12T19:01:51.899691974Z" level=info msg="Start streaming server" Apr 12 19:01:52.013505 polkitd[1185]: Started polkitd version 121 Apr 12 19:01:52.050360 polkitd[1185]: Loading rules from directory /etc/polkit-1/rules.d Apr 12 19:01:52.050467 polkitd[1185]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 12 19:01:52.055242 polkitd[1185]: Finished loading, compiling and executing 2 rules Apr 12 19:01:52.057671 dbus-daemon[1105]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 12 19:01:52.057920 systemd[1]: Started polkit.service. 
Apr 12 19:01:52.058713 polkitd[1185]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 12 19:01:52.071106 tar[1137]: ./ptp Apr 12 19:01:52.115222 systemd-hostnamed[1159]: Hostname set to (transient) Apr 12 19:01:52.118844 systemd-resolved[1078]: System hostname changed to 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal'. Apr 12 19:01:52.264196 tar[1137]: ./vlan Apr 12 19:01:52.445039 tar[1137]: ./host-device Apr 12 19:01:52.615512 tar[1137]: ./tuning Apr 12 19:01:52.759071 tar[1137]: ./vrf Apr 12 19:01:52.869163 tar[1137]: ./sbr Apr 12 19:01:52.987885 tar[1137]: ./tap Apr 12 19:01:53.016395 tar[1139]: linux-amd64/LICENSE Apr 12 19:01:53.020547 tar[1139]: linux-amd64/README.md Apr 12 19:01:53.037385 systemd[1]: Finished prepare-helm.service. Apr 12 19:01:53.088939 tar[1137]: ./dhcp Apr 12 19:01:53.169358 systemd[1]: Finished prepare-critools.service. Apr 12 19:01:53.358697 tar[1137]: ./static Apr 12 19:01:53.434295 tar[1137]: ./firewall Apr 12 19:01:53.544788 tar[1137]: ./macvlan Apr 12 19:01:53.649157 tar[1137]: ./dummy Apr 12 19:01:53.761651 tar[1137]: ./bridge Apr 12 19:01:53.874286 tar[1137]: ./ipvlan Apr 12 19:01:53.972928 tar[1137]: ./portmap Apr 12 19:01:54.034414 tar[1137]: ./host-local Apr 12 19:01:54.152287 systemd[1]: Finished prepare-cni-plugins.service. Apr 12 19:01:55.586230 sshd_keygen[1147]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 12 19:01:55.631908 systemd[1]: Finished sshd-keygen.service. Apr 12 19:01:55.641285 systemd[1]: Starting issuegen.service... Apr 12 19:01:55.653555 systemd[1]: issuegen.service: Deactivated successfully. Apr 12 19:01:55.653837 systemd[1]: Finished issuegen.service. Apr 12 19:01:55.663103 systemd[1]: Starting systemd-user-sessions.service... Apr 12 19:01:55.663908 locksmithd[1174]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 12 19:01:55.673645 systemd[1]: Finished systemd-user-sessions.service. 
Apr 12 19:01:55.684265 systemd[1]: Started getty@tty1.service. Apr 12 19:01:55.693920 systemd[1]: Started serial-getty@ttyS0.service. Apr 12 19:01:55.702308 systemd[1]: Reached target getty.target. Apr 12 19:01:57.477298 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Apr 12 19:01:59.538852 kernel: loop0: detected capacity change from 0 to 2097152 Apr 12 19:01:59.560569 systemd-nspawn[1216]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Apr 12 19:01:59.560569 systemd-nspawn[1216]: Press ^] three times within 1s to kill container. Apr 12 19:01:59.575891 kernel: EXT4-fs (loop0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Apr 12 19:01:59.658643 systemd[1]: Started oem-gce.service. Apr 12 19:01:59.666541 systemd[1]: Reached target multi-user.target. Apr 12 19:01:59.677470 systemd[1]: Starting systemd-update-utmp-runlevel.service... Apr 12 19:01:59.691546 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Apr 12 19:01:59.691858 systemd[1]: Finished systemd-update-utmp-runlevel.service. Apr 12 19:01:59.702280 systemd[1]: Startup finished in 1.130s (kernel) + 9.451s (initrd) + 16.379s (userspace) = 26.961s. Apr 12 19:01:59.758721 systemd-nspawn[1216]: + '[' -e /etc/default/instance_configs.cfg.template ']' Apr 12 19:01:59.758721 systemd-nspawn[1216]: + echo -e '[InstanceSetup]\nset_host_keys = false' Apr 12 19:01:59.759091 systemd-nspawn[1216]: + /usr/bin/google_instance_setup Apr 12 19:02:00.209712 systemd[1]: Created slice system-sshd.slice. Apr 12 19:02:00.212374 systemd[1]: Started sshd@0-10.128.0.35:22-139.178.89.65:56548.service. Apr 12 19:02:00.481315 instance-setup[1222]: INFO Running google_set_multiqueue. Apr 12 19:02:00.507402 instance-setup[1222]: INFO Set channels for eth0 to 2. Apr 12 19:02:00.513156 instance-setup[1222]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. 
Apr 12 19:02:00.515670 instance-setup[1222]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Apr 12 19:02:00.516188 instance-setup[1222]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Apr 12 19:02:00.517531 instance-setup[1222]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Apr 12 19:02:00.517998 instance-setup[1222]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Apr 12 19:02:00.519488 instance-setup[1222]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Apr 12 19:02:00.519852 instance-setup[1222]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Apr 12 19:02:00.521332 instance-setup[1222]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Apr 12 19:02:00.534090 instance-setup[1222]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Apr 12 19:02:00.534287 instance-setup[1222]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Apr 12 19:02:00.584997 sshd[1224]: Accepted publickey for core from 139.178.89.65 port 56548 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:02:00.587499 systemd-nspawn[1216]: + /usr/bin/google_metadata_script_runner --script-type startup Apr 12 19:02:00.589032 sshd[1224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:02:00.610519 systemd[1]: Created slice user-500.slice. Apr 12 19:02:00.615577 systemd[1]: Starting user-runtime-dir@500.service... Apr 12 19:02:00.621942 systemd-logind[1127]: New session 1 of user core. Apr 12 19:02:00.634131 systemd[1]: Finished user-runtime-dir@500.service. Apr 12 19:02:00.636681 systemd[1]: Starting user@500.service... Apr 12 19:02:00.658734 (systemd)[1258]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:02:00.822598 systemd[1258]: Queued start job for default target default.target. Apr 12 19:02:00.824968 systemd[1258]: Reached target paths.target. 
Apr 12 19:02:00.825255 systemd[1258]: Reached target sockets.target. Apr 12 19:02:00.825443 systemd[1258]: Reached target timers.target. Apr 12 19:02:00.825607 systemd[1258]: Reached target basic.target. Apr 12 19:02:00.825824 systemd[1]: Started user@500.service. Apr 12 19:02:00.827506 systemd[1]: Started session-1.scope. Apr 12 19:02:00.830621 systemd[1258]: Reached target default.target. Apr 12 19:02:00.831160 systemd[1258]: Startup finished in 158ms. Apr 12 19:02:01.026926 startup-script[1256]: INFO Starting startup scripts. Apr 12 19:02:01.047008 startup-script[1256]: INFO No startup scripts found in metadata. Apr 12 19:02:01.047603 startup-script[1256]: INFO Finished running startup scripts. Apr 12 19:02:01.091164 systemd[1]: Started sshd@1-10.128.0.35:22-139.178.89.65:56562.service. Apr 12 19:02:01.123621 systemd-nspawn[1216]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Apr 12 19:02:01.123621 systemd-nspawn[1216]: + daemon_pids=() Apr 12 19:02:01.124972 systemd-nspawn[1216]: + for d in accounts clock_skew network Apr 12 19:02:01.124972 systemd-nspawn[1216]: + daemon_pids+=($!) Apr 12 19:02:01.124972 systemd-nspawn[1216]: + for d in accounts clock_skew network Apr 12 19:02:01.124972 systemd-nspawn[1216]: + /usr/bin/google_accounts_daemon Apr 12 19:02:01.124972 systemd-nspawn[1216]: + daemon_pids+=($!) Apr 12 19:02:01.124972 systemd-nspawn[1216]: + for d in accounts clock_skew network Apr 12 19:02:01.124972 systemd-nspawn[1216]: + daemon_pids+=($!) 
Apr 12 19:02:01.124972 systemd-nspawn[1216]: + NOTIFY_SOCKET=/run/systemd/notify Apr 12 19:02:01.124972 systemd-nspawn[1216]: + /usr/bin/systemd-notify --ready Apr 12 19:02:01.124972 systemd-nspawn[1216]: + /usr/bin/google_clock_skew_daemon Apr 12 19:02:01.125844 systemd-nspawn[1216]: + /usr/bin/google_network_daemon Apr 12 19:02:01.196609 systemd-nspawn[1216]: + wait -n 36 37 38 Apr 12 19:02:01.478042 sshd[1269]: Accepted publickey for core from 139.178.89.65 port 56562 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:02:01.479726 sshd[1269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:02:01.488107 systemd[1]: Started session-2.scope. Apr 12 19:02:01.489178 systemd-logind[1127]: New session 2 of user core. Apr 12 19:02:01.741095 sshd[1269]: pam_unix(sshd:session): session closed for user core Apr 12 19:02:01.745672 systemd[1]: sshd@1-10.128.0.35:22-139.178.89.65:56562.service: Deactivated successfully. Apr 12 19:02:01.746922 systemd[1]: session-2.scope: Deactivated successfully. Apr 12 19:02:01.748079 systemd-logind[1127]: Session 2 logged out. Waiting for processes to exit. Apr 12 19:02:01.750288 systemd-logind[1127]: Removed session 2. Apr 12 19:02:01.794532 systemd[1]: Started sshd@2-10.128.0.35:22-139.178.89.65:56564.service. Apr 12 19:02:01.888969 google-clock-skew[1272]: INFO Starting Google Clock Skew daemon. Apr 12 19:02:01.893132 google-networking[1273]: INFO Starting Google Networking daemon. Apr 12 19:02:01.895754 groupadd[1288]: group added to /etc/group: name=google-sudoers, GID=1000 Apr 12 19:02:01.899980 groupadd[1288]: group added to /etc/gshadow: name=google-sudoers Apr 12 19:02:01.905135 google-clock-skew[1272]: INFO Clock drift token has changed: 0. Apr 12 19:02:01.909723 groupadd[1288]: new group: name=google-sudoers, GID=1000 Apr 12 19:02:01.914396 systemd-nspawn[1216]: hwclock: Cannot access the Hardware Clock via any known method. 
Apr 12 19:02:01.914396 systemd-nspawn[1216]: hwclock: Use the --verbose option to see the details of our search for an access method. Apr 12 19:02:01.915888 google-clock-skew[1272]: WARNING Failed to sync system time with hardware clock. Apr 12 19:02:01.932367 google-accounts[1271]: INFO Starting Google Accounts daemon. Apr 12 19:02:01.959317 google-accounts[1271]: WARNING OS Login not installed. Apr 12 19:02:01.960693 google-accounts[1271]: INFO Creating a new user account for 0. Apr 12 19:02:01.967076 systemd-nspawn[1216]: useradd: invalid user name '0': use --badname to ignore Apr 12 19:02:01.967819 google-accounts[1271]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Apr 12 19:02:02.153166 sshd[1286]: Accepted publickey for core from 139.178.89.65 port 56564 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:02:02.156499 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:02:02.164222 systemd[1]: Started session-3.scope. Apr 12 19:02:02.165269 systemd-logind[1127]: New session 3 of user core. Apr 12 19:02:02.398097 sshd[1286]: pam_unix(sshd:session): session closed for user core Apr 12 19:02:02.402934 systemd[1]: sshd@2-10.128.0.35:22-139.178.89.65:56564.service: Deactivated successfully. Apr 12 19:02:02.404137 systemd[1]: session-3.scope: Deactivated successfully. Apr 12 19:02:02.405326 systemd-logind[1127]: Session 3 logged out. Waiting for processes to exit. Apr 12 19:02:02.406670 systemd-logind[1127]: Removed session 3. Apr 12 19:02:02.453661 systemd[1]: Started sshd@3-10.128.0.35:22-139.178.89.65:56570.service. 
Apr 12 19:02:02.799158 sshd[1304]: Accepted publickey for core from 139.178.89.65 port 56570 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:02:02.801009 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:02:02.807930 systemd-logind[1127]: New session 4 of user core. Apr 12 19:02:02.808528 systemd[1]: Started session-4.scope. Apr 12 19:02:03.050305 sshd[1304]: pam_unix(sshd:session): session closed for user core Apr 12 19:02:03.055955 systemd[1]: sshd@3-10.128.0.35:22-139.178.89.65:56570.service: Deactivated successfully. Apr 12 19:02:03.057232 systemd[1]: session-4.scope: Deactivated successfully. Apr 12 19:02:03.058252 systemd-logind[1127]: Session 4 logged out. Waiting for processes to exit. Apr 12 19:02:03.059650 systemd-logind[1127]: Removed session 4. Apr 12 19:02:03.106654 systemd[1]: Started sshd@4-10.128.0.35:22-139.178.89.65:56572.service. Apr 12 19:02:03.456194 sshd[1310]: Accepted publickey for core from 139.178.89.65 port 56572 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:02:03.458601 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:02:03.465740 systemd[1]: Started session-5.scope. Apr 12 19:02:03.466376 systemd-logind[1127]: New session 5 of user core. Apr 12 19:02:03.685050 sudo[1313]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 12 19:02:03.685457 sudo[1313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Apr 12 19:02:04.309247 systemd[1]: Starting systemd-networkd-wait-online.service... Apr 12 19:02:04.319014 systemd[1]: Finished systemd-networkd-wait-online.service. Apr 12 19:02:04.319605 systemd[1]: Reached target network-online.target. Apr 12 19:02:04.322116 systemd[1]: Starting docker.service... 
Apr 12 19:02:04.375664 env[1329]: time="2024-04-12T19:02:04.375571182Z" level=info msg="Starting up" Apr 12 19:02:04.379146 env[1329]: time="2024-04-12T19:02:04.379086168Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 19:02:04.379146 env[1329]: time="2024-04-12T19:02:04.379117283Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 19:02:04.379398 env[1329]: time="2024-04-12T19:02:04.379149269Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 19:02:04.379398 env[1329]: time="2024-04-12T19:02:04.379168368Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 19:02:04.382112 env[1329]: time="2024-04-12T19:02:04.382057105Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 19:02:04.382112 env[1329]: time="2024-04-12T19:02:04.382085313Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 19:02:04.382112 env[1329]: time="2024-04-12T19:02:04.382112118Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 19:02:04.382364 env[1329]: time="2024-04-12T19:02:04.382130177Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 19:02:04.422152 env[1329]: time="2024-04-12T19:02:04.422088677Z" level=info msg="Loading containers: start." Apr 12 19:02:04.600857 kernel: Initializing XFRM netlink socket Apr 12 19:02:04.650899 env[1329]: time="2024-04-12T19:02:04.650829746Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Apr 12 19:02:04.735821 systemd-networkd[1023]: docker0: Link UP Apr 12 19:02:04.753179 env[1329]: time="2024-04-12T19:02:04.753105791Z" level=info msg="Loading containers: done." 
Apr 12 19:02:04.771066 env[1329]: time="2024-04-12T19:02:04.770987094Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 12 19:02:04.771373 env[1329]: time="2024-04-12T19:02:04.771332666Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Apr 12 19:02:04.771551 env[1329]: time="2024-04-12T19:02:04.771511297Z" level=info msg="Daemon has completed initialization" Apr 12 19:02:04.795936 systemd[1]: Started docker.service. Apr 12 19:02:04.808089 env[1329]: time="2024-04-12T19:02:04.807979828Z" level=info msg="API listen on /run/docker.sock" Apr 12 19:02:04.837647 systemd[1]: Reloading. Apr 12 19:02:04.970008 /usr/lib/systemd/system-generators/torcx-generator[1466]: time="2024-04-12T19:02:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 19:02:04.970057 /usr/lib/systemd/system-generators/torcx-generator[1466]: time="2024-04-12T19:02:04Z" level=info msg="torcx already run" Apr 12 19:02:05.065417 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 19:02:05.065449 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 19:02:05.089656 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 19:02:05.248265 systemd[1]: Started kubelet.service. 
Apr 12 19:02:05.345162 kubelet[1509]: E0412 19:02:05.345061 1509 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 19:02:05.348049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 19:02:05.348272 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 19:02:05.893823 env[1144]: time="2024-04-12T19:02:05.893752966Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.8\"" Apr 12 19:02:06.345000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1777212232.mount: Deactivated successfully. Apr 12 19:02:08.476346 env[1144]: time="2024-04-12T19:02:08.476257597Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:08.479707 env[1144]: time="2024-04-12T19:02:08.479654680Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e70a71eaa5605454dd0adfd46911b0203db5baf1107de51ba9943d2eaea23142,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:08.486926 env[1144]: time="2024-04-12T19:02:08.486879885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:08.489374 env[1144]: time="2024-04-12T19:02:08.489334150Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:7e7f3c806333528451a1e0bfdf17da0341adaea7d50a703db9c2005c474a97b9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:08.490347 env[1144]: time="2024-04-12T19:02:08.490292987Z" 
level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.8\" returns image reference \"sha256:e70a71eaa5605454dd0adfd46911b0203db5baf1107de51ba9943d2eaea23142\"" Apr 12 19:02:08.505522 env[1144]: time="2024-04-12T19:02:08.505476294Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.8\"" Apr 12 19:02:10.525064 env[1144]: time="2024-04-12T19:02:10.524952434Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:10.528353 env[1144]: time="2024-04-12T19:02:10.528295889Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e5ae3e4dc6566b175cc53982cae28703dcd88916c37b4d2c0cb688faf8e05fad,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:10.530917 env[1144]: time="2024-04-12T19:02:10.530871555Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:10.533735 env[1144]: time="2024-04-12T19:02:10.533689272Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:f3d0e8da9d1532e081e719a985e89a0cfe1a29d127773ad8e2c2fee1dd10fd00,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:10.534729 env[1144]: time="2024-04-12T19:02:10.534668855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.8\" returns image reference \"sha256:e5ae3e4dc6566b175cc53982cae28703dcd88916c37b4d2c0cb688faf8e05fad\"" Apr 12 19:02:10.553641 env[1144]: time="2024-04-12T19:02:10.553567275Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.8\"" Apr 12 19:02:11.870148 env[1144]: time="2024-04-12T19:02:11.870059337Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:11.873144 env[1144]: time="2024-04-12T19:02:11.873078614Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ad3260645145d9611fcf5e5936ddf7cf5be8990fe44160c960c2f3cc643fb4e4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:11.876091 env[1144]: time="2024-04-12T19:02:11.876049587Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:11.879677 env[1144]: time="2024-04-12T19:02:11.879629818Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:4d61604f259d3c91d8b3ec7a6a999f5eae9aff371567151cd5165eaa698c6d7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:11.880222 env[1144]: time="2024-04-12T19:02:11.880162816Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.8\" returns image reference \"sha256:ad3260645145d9611fcf5e5936ddf7cf5be8990fe44160c960c2f3cc643fb4e4\"" Apr 12 19:02:11.896853 env[1144]: time="2024-04-12T19:02:11.896789900Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.8\"" Apr 12 19:02:12.982608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2923286205.mount: Deactivated successfully. 
Apr 12 19:02:13.667442 env[1144]: time="2024-04-12T19:02:13.667340987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:13.670660 env[1144]: time="2024-04-12T19:02:13.670602909Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5ce97277076c6f5c87d43fec5e3eacad030c82c81b2756d2bba4569d22fc65dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:13.673064 env[1144]: time="2024-04-12T19:02:13.673016323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:13.675423 env[1144]: time="2024-04-12T19:02:13.675372250Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:9e9dd46799712c58e1a49f973374ffa9ad4e5a6175896e5d805a8738bf5c5865,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:13.676228 env[1144]: time="2024-04-12T19:02:13.676176487Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.8\" returns image reference \"sha256:5ce97277076c6f5c87d43fec5e3eacad030c82c81b2756d2bba4569d22fc65dc\"" Apr 12 19:02:13.693833 env[1144]: time="2024-04-12T19:02:13.693740937Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 12 19:02:14.083549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2719205296.mount: Deactivated successfully. 
Apr 12 19:02:14.092536 env[1144]: time="2024-04-12T19:02:14.092447678Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:14.095208 env[1144]: time="2024-04-12T19:02:14.095151856Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:14.098180 env[1144]: time="2024-04-12T19:02:14.098135283Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:14.101174 env[1144]: time="2024-04-12T19:02:14.101128948Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:14.101952 env[1144]: time="2024-04-12T19:02:14.101892887Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 12 19:02:14.118850 env[1144]: time="2024-04-12T19:02:14.118762928Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Apr 12 19:02:14.722716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2133157931.mount: Deactivated successfully. Apr 12 19:02:15.061349 systemd[1]: Started sshd@5-10.128.0.35:22-119.196.24.240:61814.service. Apr 12 19:02:15.366206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 12 19:02:15.366579 systemd[1]: Stopped kubelet.service. Apr 12 19:02:15.370958 systemd[1]: Started kubelet.service. 
Apr 12 19:02:15.486872 kubelet[1557]: E0412 19:02:15.486764 1557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 19:02:15.491854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 19:02:15.492084 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 19:02:17.592658 sshd[1555]: Failed password for root from 119.196.24.240 port 61814 ssh2 Apr 12 19:02:17.913031 sshd[1555]: Failed password for root from 119.196.24.240 port 61814 ssh2 Apr 12 19:02:18.219306 sshd[1555]: Failed password for root from 119.196.24.240 port 61814 ssh2 Apr 12 19:02:19.485007 env[1144]: time="2024-04-12T19:02:19.484918995Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:19.488430 env[1144]: time="2024-04-12T19:02:19.488377392Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:19.491573 env[1144]: time="2024-04-12T19:02:19.491523975Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:19.494177 env[1144]: time="2024-04-12T19:02:19.494111350Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:19.495234 env[1144]: time="2024-04-12T19:02:19.495177403Z" level=info 
msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" Apr 12 19:02:19.510834 env[1144]: time="2024-04-12T19:02:19.510772692Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Apr 12 19:02:19.833713 sshd[1555]: Failed password for root from 119.196.24.240 port 61814 ssh2 Apr 12 19:02:19.914932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount964137590.mount: Deactivated successfully. Apr 12 19:02:20.673346 sshd[1555]: Connection reset by authenticating user root 119.196.24.240 port 61814 [preauth] Apr 12 19:02:20.674378 systemd[1]: sshd@5-10.128.0.35:22-119.196.24.240:61814.service: Deactivated successfully. Apr 12 19:02:20.833977 env[1144]: time="2024-04-12T19:02:20.833898032Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:20.837043 env[1144]: time="2024-04-12T19:02:20.836992172Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:20.839435 env[1144]: time="2024-04-12T19:02:20.839393131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:20.841680 env[1144]: time="2024-04-12T19:02:20.841638726Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:20.842488 env[1144]: time="2024-04-12T19:02:20.842435469Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference 
\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Apr 12 19:02:22.148696 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 12 19:02:24.632528 systemd[1]: Stopped kubelet.service. Apr 12 19:02:24.656985 systemd[1]: Reloading. Apr 12 19:02:24.772994 /usr/lib/systemd/system-generators/torcx-generator[1656]: time="2024-04-12T19:02:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 19:02:24.773042 /usr/lib/systemd/system-generators/torcx-generator[1656]: time="2024-04-12T19:02:24Z" level=info msg="torcx already run" Apr 12 19:02:24.863472 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 19:02:24.863502 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 19:02:24.887165 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 19:02:25.019892 systemd[1]: Started kubelet.service. Apr 12 19:02:25.079891 kubelet[1700]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 19:02:25.079891 kubelet[1700]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Apr 12 19:02:25.079891 kubelet[1700]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 19:02:25.080506 kubelet[1700]: I0412 19:02:25.079972 1700 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 19:02:25.699061 kubelet[1700]: I0412 19:02:25.699004 1700 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Apr 12 19:02:25.699061 kubelet[1700]: I0412 19:02:25.699046 1700 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 19:02:25.699463 kubelet[1700]: I0412 19:02:25.699435 1700 server.go:895] "Client rotation is on, will bootstrap in background" Apr 12 19:02:25.708700 kubelet[1700]: E0412 19:02:25.708669 1700 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:25.710593 kubelet[1700]: I0412 19:02:25.710367 1700 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 19:02:25.721822 kubelet[1700]: I0412 19:02:25.721751 1700 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 12 19:02:25.722296 kubelet[1700]: I0412 19:02:25.722261 1700 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 19:02:25.722590 kubelet[1700]: I0412 19:02:25.722546 1700 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 12 19:02:25.722783 kubelet[1700]: I0412 19:02:25.722596 1700 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 19:02:25.722783 kubelet[1700]: I0412 19:02:25.722614 1700 container_manager_linux.go:301] "Creating device plugin manager" Apr 12 19:02:25.722967 kubelet[1700]: I0412 
19:02:25.722842 1700 state_mem.go:36] "Initialized new in-memory state store" Apr 12 19:02:25.723405 kubelet[1700]: I0412 19:02:25.723378 1700 kubelet.go:393] "Attempting to sync node with API server" Apr 12 19:02:25.723517 kubelet[1700]: I0412 19:02:25.723419 1700 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 19:02:25.723517 kubelet[1700]: I0412 19:02:25.723479 1700 kubelet.go:309] "Adding apiserver pod source" Apr 12 19:02:25.723517 kubelet[1700]: I0412 19:02:25.723507 1700 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 19:02:25.725702 kubelet[1700]: W0412 19:02:25.725645 1700 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.128.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:25.725896 kubelet[1700]: E0412 19:02:25.725877 1700 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:25.726193 kubelet[1700]: W0412 19:02:25.726153 1700 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.128.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:25.726373 kubelet[1700]: E0412 19:02:25.726355 1700 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:25.726856 
kubelet[1700]: I0412 19:02:25.726820 1700 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 19:02:25.732719 kubelet[1700]: W0412 19:02:25.732695 1700 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 12 19:02:25.733636 kubelet[1700]: I0412 19:02:25.733614 1700 server.go:1232] "Started kubelet" Apr 12 19:02:25.735325 kubelet[1700]: I0412 19:02:25.735281 1700 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Apr 12 19:02:25.745721 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Apr 12 19:02:25.745867 kubelet[1700]: I0412 19:02:25.735960 1700 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 19:02:25.745867 kubelet[1700]: I0412 19:02:25.736052 1700 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 19:02:25.745867 kubelet[1700]: E0412 19:02:25.745455 1700 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 12 19:02:25.745867 kubelet[1700]: E0412 19:02:25.745487 1700 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 19:02:25.746099 kubelet[1700]: E0412 19:02:25.745614 1700 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal.17c59da072f8e9a1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal", UID:"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.April, 12, 19, 2, 25, 733585313, time.Local), LastTimestamp:time.Date(2024, time.April, 12, 19, 2, 25, 733585313, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal"}': 'Post "https://10.128.0.35:6443/api/v1/namespaces/default/events": dial tcp 10.128.0.35:6443: connect: connection refused'(may retry after sleeping) Apr 12 19:02:25.746708 kubelet[1700]: I0412 19:02:25.746659 1700 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 19:02:25.748790 kubelet[1700]: I0412 19:02:25.748765 1700 server.go:462] "Adding debug handlers to kubelet 
server" Apr 12 19:02:25.748973 kubelet[1700]: I0412 19:02:25.748949 1700 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 19:02:25.749529 kubelet[1700]: I0412 19:02:25.749498 1700 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 19:02:25.749636 kubelet[1700]: I0412 19:02:25.749602 1700 reconciler_new.go:29] "Reconciler: start to sync state" Apr 12 19:02:25.752739 kubelet[1700]: W0412 19:02:25.752672 1700 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.128.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:25.752739 kubelet[1700]: E0412 19:02:25.752741 1700 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:25.752961 kubelet[1700]: E0412 19:02:25.752930 1700 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.35:6443: connect: connection refused" interval="200ms" Apr 12 19:02:25.814483 kubelet[1700]: I0412 19:02:25.814450 1700 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 19:02:25.814483 kubelet[1700]: I0412 19:02:25.814481 1700 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 19:02:25.814719 kubelet[1700]: I0412 19:02:25.814518 1700 state_mem.go:36] "Initialized new in-memory state store" Apr 12 19:02:25.818291 kubelet[1700]: I0412 19:02:25.818265 1700 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Apr 12 19:02:25.820165 kubelet[1700]: I0412 19:02:25.820126 1700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 12 19:02:25.820165 kubelet[1700]: I0412 19:02:25.820161 1700 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 19:02:25.820343 kubelet[1700]: I0412 19:02:25.820186 1700 kubelet.go:2303] "Starting kubelet main sync loop" Apr 12 19:02:25.820343 kubelet[1700]: E0412 19:02:25.820269 1700 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 19:02:25.821972 kubelet[1700]: W0412 19:02:25.821940 1700 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.128.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:25.822166 kubelet[1700]: E0412 19:02:25.822149 1700 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:25.857751 kubelet[1700]: I0412 19:02:25.857489 1700 policy_none.go:49] "None policy: Start" Apr 12 19:02:25.858305 kubelet[1700]: I0412 19:02:25.858247 1700 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:25.859180 kubelet[1700]: I0412 19:02:25.859143 1700 memory_manager.go:169] "Starting memorymanager" policy="None" Apr 12 19:02:25.859180 kubelet[1700]: I0412 19:02:25.859178 1700 state_mem.go:35] "Initializing new in-memory state store" Apr 12 19:02:25.859581 kubelet[1700]: E0412 19:02:25.859537 1700 kubelet_node_status.go:92] "Unable to register node with API server" err="Post 
\"https://10.128.0.35:6443/api/v1/nodes\": dial tcp 10.128.0.35:6443: connect: connection refused" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:25.876689 systemd[1]: Created slice kubepods.slice. Apr 12 19:02:25.884207 systemd[1]: Created slice kubepods-burstable.slice. Apr 12 19:02:25.889731 systemd[1]: Created slice kubepods-besteffort.slice. Apr 12 19:02:25.899869 kubelet[1700]: I0412 19:02:25.899823 1700 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 19:02:25.900864 kubelet[1700]: I0412 19:02:25.900842 1700 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 19:02:25.903050 kubelet[1700]: E0412 19:02:25.903021 1700 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" not found" Apr 12 19:02:25.921424 kubelet[1700]: I0412 19:02:25.921368 1700 topology_manager.go:215] "Topology Admit Handler" podUID="25fdb341cfe2212fe583a9e0a9ec6e78" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:25.941558 kubelet[1700]: I0412 19:02:25.941509 1700 topology_manager.go:215] "Topology Admit Handler" podUID="99cc92145fb2b2e39e67f45e0af5d5aa" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:25.945075 kubelet[1700]: I0412 19:02:25.945028 1700 topology_manager.go:215] "Topology Admit Handler" podUID="bbe3be810be868e2fa78b1dd507c21f9" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:25.952114 systemd[1]: Created slice kubepods-burstable-pod25fdb341cfe2212fe583a9e0a9ec6e78.slice. 
Apr 12 19:02:25.956049 kubelet[1700]: E0412 19:02:25.955180 1700 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.35:6443: connect: connection refused" interval="400ms" Apr 12 19:02:25.968945 systemd[1]: Created slice kubepods-burstable-pod99cc92145fb2b2e39e67f45e0af5d5aa.slice. Apr 12 19:02:25.981514 systemd[1]: Created slice kubepods-burstable-podbbe3be810be868e2fa78b1dd507c21f9.slice. Apr 12 19:02:26.050914 kubelet[1700]: I0412 19:02:26.050838 1700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25fdb341cfe2212fe583a9e0a9ec6e78-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"25fdb341cfe2212fe583a9e0a9ec6e78\") " pod="kube-system/kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:26.050914 kubelet[1700]: I0412 19:02:26.050910 1700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99cc92145fb2b2e39e67f45e0af5d5aa-ca-certs\") pod \"kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"99cc92145fb2b2e39e67f45e0af5d5aa\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:26.051845 kubelet[1700]: I0412 19:02:26.050970 1700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/99cc92145fb2b2e39e67f45e0af5d5aa-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: 
\"99cc92145fb2b2e39e67f45e0af5d5aa\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:26.051845 kubelet[1700]: I0412 19:02:26.051015 1700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99cc92145fb2b2e39e67f45e0af5d5aa-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"99cc92145fb2b2e39e67f45e0af5d5aa\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:26.051845 kubelet[1700]: I0412 19:02:26.051102 1700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/99cc92145fb2b2e39e67f45e0af5d5aa-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"99cc92145fb2b2e39e67f45e0af5d5aa\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:26.051845 kubelet[1700]: I0412 19:02:26.051141 1700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bbe3be810be868e2fa78b1dd507c21f9-kubeconfig\") pod \"kube-scheduler-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"bbe3be810be868e2fa78b1dd507c21f9\") " pod="kube-system/kube-scheduler-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:26.052009 kubelet[1700]: I0412 19:02:26.051178 1700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25fdb341cfe2212fe583a9e0a9ec6e78-k8s-certs\") pod \"kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: 
\"25fdb341cfe2212fe583a9e0a9ec6e78\") " pod="kube-system/kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:26.052009 kubelet[1700]: I0412 19:02:26.051217 1700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99cc92145fb2b2e39e67f45e0af5d5aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"99cc92145fb2b2e39e67f45e0af5d5aa\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:26.052009 kubelet[1700]: I0412 19:02:26.051256 1700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25fdb341cfe2212fe583a9e0a9ec6e78-ca-certs\") pod \"kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"25fdb341cfe2212fe583a9e0a9ec6e78\") " pod="kube-system/kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:26.066026 kubelet[1700]: I0412 19:02:26.065983 1700 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:26.066388 kubelet[1700]: E0412 19:02:26.066362 1700 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.35:6443/api/v1/nodes\": dial tcp 10.128.0.35:6443: connect: connection refused" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:26.264131 env[1144]: time="2024-04-12T19:02:26.263711506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal,Uid:25fdb341cfe2212fe583a9e0a9ec6e78,Namespace:kube-system,Attempt:0,}" Apr 12 19:02:26.274503 env[1144]: time="2024-04-12T19:02:26.274441761Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal,Uid:99cc92145fb2b2e39e67f45e0af5d5aa,Namespace:kube-system,Attempt:0,}" Apr 12 19:02:26.285646 env[1144]: time="2024-04-12T19:02:26.285599430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal,Uid:bbe3be810be868e2fa78b1dd507c21f9,Namespace:kube-system,Attempt:0,}" Apr 12 19:02:26.355904 kubelet[1700]: E0412 19:02:26.355853 1700 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.35:6443: connect: connection refused" interval="800ms" Apr 12 19:02:26.471454 kubelet[1700]: I0412 19:02:26.471410 1700 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:26.471853 kubelet[1700]: E0412 19:02:26.471816 1700 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.35:6443/api/v1/nodes\": dial tcp 10.128.0.35:6443: connect: connection refused" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:26.530881 kubelet[1700]: W0412 19:02:26.530694 1700 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.128.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:26.530881 kubelet[1700]: E0412 19:02:26.530783 1700 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:26.758621 
kubelet[1700]: W0412 19:02:26.758499 1700 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.128.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:26.758621 kubelet[1700]: E0412 19:02:26.758625 1700 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:27.079179 kubelet[1700]: W0412 19:02:27.079109 1700 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.128.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:27.079179 kubelet[1700]: E0412 19:02:27.079174 1700 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:27.130595 kubelet[1700]: W0412 19:02:27.086729 1700 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.128.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:27.130595 kubelet[1700]: E0412 19:02:27.086836 1700 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.128.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:27.157059 kubelet[1700]: E0412 19:02:27.156977 1700 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.35:6443: connect: connection refused" interval="1.6s" Apr 12 19:02:27.279554 kubelet[1700]: I0412 19:02:27.279501 1700 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:27.280199 kubelet[1700]: E0412 19:02:27.280128 1700 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.35:6443/api/v1/nodes\": dial tcp 10.128.0.35:6443: connect: connection refused" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:27.888240 kubelet[1700]: E0412 19:02:27.888183 1700 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:28.333723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount403946456.mount: Deactivated successfully. 
Apr 12 19:02:28.345524 env[1144]: time="2024-04-12T19:02:28.345452781Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:28.347358 env[1144]: time="2024-04-12T19:02:28.347311450Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:28.352473 env[1144]: time="2024-04-12T19:02:28.352415844Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:28.353925 env[1144]: time="2024-04-12T19:02:28.353876618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:28.355695 env[1144]: time="2024-04-12T19:02:28.355614302Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:28.357633 env[1144]: time="2024-04-12T19:02:28.357582567Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:28.358641 env[1144]: time="2024-04-12T19:02:28.358582168Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:28.359584 env[1144]: time="2024-04-12T19:02:28.359547608Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:28.363839 env[1144]: time="2024-04-12T19:02:28.363777760Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:28.365990 env[1144]: time="2024-04-12T19:02:28.365940368Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:28.366845 env[1144]: time="2024-04-12T19:02:28.366780040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:28.373131 env[1144]: time="2024-04-12T19:02:28.373068263Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:02:28.425790 env[1144]: time="2024-04-12T19:02:28.425670969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 19:02:28.425790 env[1144]: time="2024-04-12T19:02:28.425745631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 19:02:28.426244 env[1144]: time="2024-04-12T19:02:28.425766875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 19:02:28.426648 env[1144]: time="2024-04-12T19:02:28.426574418Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/361bcd1d2913c85bb5af9bf615002e409d4e1d981b8feeefe06a8c3da89f4149 pid=1748 runtime=io.containerd.runc.v2 Apr 12 19:02:28.432253 env[1144]: time="2024-04-12T19:02:28.432154820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 19:02:28.432421 env[1144]: time="2024-04-12T19:02:28.432273209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 19:02:28.432421 env[1144]: time="2024-04-12T19:02:28.432320211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 19:02:28.432686 env[1144]: time="2024-04-12T19:02:28.432573477Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/642823cf5f6efafe75973940d6401311dd03ef76109a48c07612b7be71706b8b pid=1749 runtime=io.containerd.runc.v2 Apr 12 19:02:28.451020 env[1144]: time="2024-04-12T19:02:28.450518134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 19:02:28.451020 env[1144]: time="2024-04-12T19:02:28.450606640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 19:02:28.451020 env[1144]: time="2024-04-12T19:02:28.450628367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 19:02:28.451420 env[1144]: time="2024-04-12T19:02:28.451105340Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a404e3f3f80bc2d3f520150b747ec4c3682b14cb0d0e65ddb722c8d4b9156c6 pid=1775 runtime=io.containerd.runc.v2 Apr 12 19:02:28.462324 systemd[1]: Started cri-containerd-361bcd1d2913c85bb5af9bf615002e409d4e1d981b8feeefe06a8c3da89f4149.scope. Apr 12 19:02:28.529082 systemd[1]: Started cri-containerd-4a404e3f3f80bc2d3f520150b747ec4c3682b14cb0d0e65ddb722c8d4b9156c6.scope. Apr 12 19:02:28.530914 systemd[1]: Started cri-containerd-642823cf5f6efafe75973940d6401311dd03ef76109a48c07612b7be71706b8b.scope. Apr 12 19:02:28.614853 env[1144]: time="2024-04-12T19:02:28.611317676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal,Uid:99cc92145fb2b2e39e67f45e0af5d5aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"361bcd1d2913c85bb5af9bf615002e409d4e1d981b8feeefe06a8c3da89f4149\"" Apr 12 19:02:28.620657 kubelet[1700]: E0412 19:02:28.620063 1700 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flat" Apr 12 19:02:28.624374 env[1144]: time="2024-04-12T19:02:28.624315111Z" level=info msg="CreateContainer within sandbox \"361bcd1d2913c85bb5af9bf615002e409d4e1d981b8feeefe06a8c3da89f4149\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 12 19:02:28.654172 env[1144]: time="2024-04-12T19:02:28.654097210Z" level=info msg="CreateContainer within sandbox \"361bcd1d2913c85bb5af9bf615002e409d4e1d981b8feeefe06a8c3da89f4149\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"bf3aea9f6ace8ac1f7ce52a000af6f13952dce7b090ea49d94e1e65c07f35c42\"" Apr 12 19:02:28.655339 env[1144]: time="2024-04-12T19:02:28.655288149Z" level=info msg="StartContainer for \"bf3aea9f6ace8ac1f7ce52a000af6f13952dce7b090ea49d94e1e65c07f35c42\"" Apr 12 19:02:28.681488 env[1144]: time="2024-04-12T19:02:28.679701471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal,Uid:25fdb341cfe2212fe583a9e0a9ec6e78,Namespace:kube-system,Attempt:0,} returns sandbox id \"642823cf5f6efafe75973940d6401311dd03ef76109a48c07612b7be71706b8b\"" Apr 12 19:02:28.682586 kubelet[1700]: E0412 19:02:28.682549 1700 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-21291" Apr 12 19:02:28.685202 env[1144]: time="2024-04-12T19:02:28.685146454Z" level=info msg="CreateContainer within sandbox \"642823cf5f6efafe75973940d6401311dd03ef76109a48c07612b7be71706b8b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 12 19:02:28.688487 env[1144]: time="2024-04-12T19:02:28.688436651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal,Uid:bbe3be810be868e2fa78b1dd507c21f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a404e3f3f80bc2d3f520150b747ec4c3682b14cb0d0e65ddb722c8d4b9156c6\"" Apr 12 19:02:28.689558 kubelet[1700]: E0412 19:02:28.689371 1700 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal.17c59da072f8e9a1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal", UID:"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.April, 12, 19, 2, 25, 733585313, time.Local), LastTimestamp:time.Date(2024, time.April, 12, 19, 2, 25, 733585313, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal"}': 'Post "https://10.128.0.35:6443/api/v1/namespaces/default/events": dial tcp 10.128.0.35:6443: connect: connection refused'(may retry after sleeping) Apr 12 19:02:28.692013 kubelet[1700]: E0412 19:02:28.691977 1700 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-21291" Apr 12 19:02:28.696326 env[1144]: time="2024-04-12T19:02:28.696265810Z" level=info msg="CreateContainer within sandbox \"4a404e3f3f80bc2d3f520150b747ec4c3682b14cb0d0e65ddb722c8d4b9156c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 12 19:02:28.708040 systemd[1]: Started cri-containerd-bf3aea9f6ace8ac1f7ce52a000af6f13952dce7b090ea49d94e1e65c07f35c42.scope. 
Apr 12 19:02:28.737759 env[1144]: time="2024-04-12T19:02:28.737643403Z" level=info msg="CreateContainer within sandbox \"4a404e3f3f80bc2d3f520150b747ec4c3682b14cb0d0e65ddb722c8d4b9156c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"12d4e2967ec1d9db5a2289b74678979a0906ec03a50149e0d0be5db6f09eb21e\"" Apr 12 19:02:28.739080 env[1144]: time="2024-04-12T19:02:28.739036616Z" level=info msg="StartContainer for \"12d4e2967ec1d9db5a2289b74678979a0906ec03a50149e0d0be5db6f09eb21e\"" Apr 12 19:02:28.740252 env[1144]: time="2024-04-12T19:02:28.740202268Z" level=info msg="CreateContainer within sandbox \"642823cf5f6efafe75973940d6401311dd03ef76109a48c07612b7be71706b8b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bab9499ab8c64c01f2fcfe2c78babc45bff7ed18d731ff4433adb66573dacd04\"" Apr 12 19:02:28.742204 env[1144]: time="2024-04-12T19:02:28.742162363Z" level=info msg="StartContainer for \"bab9499ab8c64c01f2fcfe2c78babc45bff7ed18d731ff4433adb66573dacd04\"" Apr 12 19:02:28.757792 kubelet[1700]: E0412 19:02:28.757722 1700 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.35:6443: connect: connection refused" interval="3.2s" Apr 12 19:02:28.777967 systemd[1]: Started cri-containerd-bab9499ab8c64c01f2fcfe2c78babc45bff7ed18d731ff4433adb66573dacd04.scope. Apr 12 19:02:28.805996 systemd[1]: Started cri-containerd-12d4e2967ec1d9db5a2289b74678979a0906ec03a50149e0d0be5db6f09eb21e.scope. 
Apr 12 19:02:28.888386 kubelet[1700]: I0412 19:02:28.888339 1700 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:28.888999 kubelet[1700]: E0412 19:02:28.888792 1700 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.35:6443/api/v1/nodes\": dial tcp 10.128.0.35:6443: connect: connection refused" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:28.891301 env[1144]: time="2024-04-12T19:02:28.891241030Z" level=info msg="StartContainer for \"bf3aea9f6ace8ac1f7ce52a000af6f13952dce7b090ea49d94e1e65c07f35c42\" returns successfully" Apr 12 19:02:28.936228 env[1144]: time="2024-04-12T19:02:28.936173424Z" level=info msg="StartContainer for \"bab9499ab8c64c01f2fcfe2c78babc45bff7ed18d731ff4433adb66573dacd04\" returns successfully" Apr 12 19:02:28.938366 env[1144]: time="2024-04-12T19:02:28.938320160Z" level=info msg="StartContainer for \"12d4e2967ec1d9db5a2289b74678979a0906ec03a50149e0d0be5db6f09eb21e\" returns successfully" Apr 12 19:02:29.153477 kubelet[1700]: W0412 19:02:29.153297 1700 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.128.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:29.153477 kubelet[1700]: E0412 19:02:29.153391 1700 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Apr 12 19:02:32.121284 kubelet[1700]: I0412 19:02:32.121253 1700 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:33.080383 kubelet[1700]: E0412 19:02:33.080333 1700 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" not found" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:33.183618 kubelet[1700]: I0412 19:02:33.183564 1700 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:33.447340 kubelet[1700]: E0412 19:02:33.447292 1700 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:33.731717 kubelet[1700]: I0412 19:02:33.731574 1700 apiserver.go:52] "Watching apiserver" Apr 12 19:02:33.749701 kubelet[1700]: I0412 19:02:33.749662 1700 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 19:02:35.633779 systemd[1]: Reloading. Apr 12 19:02:35.772931 /usr/lib/systemd/system-generators/torcx-generator[1990]: time="2024-04-12T19:02:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 19:02:35.772996 /usr/lib/systemd/system-generators/torcx-generator[1990]: time="2024-04-12T19:02:35Z" level=info msg="torcx already run" Apr 12 19:02:35.875231 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 19:02:35.875262 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Apr 12 19:02:35.901473 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 19:02:36.073923 systemd[1]: Stopping kubelet.service... Apr 12 19:02:36.074667 kubelet[1700]: I0412 19:02:36.074621 1700 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 19:02:36.094981 systemd[1]: kubelet.service: Deactivated successfully. Apr 12 19:02:36.095392 systemd[1]: Stopped kubelet.service. Apr 12 19:02:36.095506 systemd[1]: kubelet.service: Consumed 1.176s CPU time. Apr 12 19:02:36.099027 systemd[1]: Started kubelet.service. Apr 12 19:02:36.224566 kubelet[2033]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 19:02:36.224566 kubelet[2033]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 19:02:36.224566 kubelet[2033]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 12 19:02:36.225259 kubelet[2033]: I0412 19:02:36.224598 2033 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 19:02:36.228313 sudo[2045]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 12 19:02:36.229349 sudo[2045]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Apr 12 19:02:36.235616 kubelet[2033]: I0412 19:02:36.235580 2033 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Apr 12 19:02:36.235880 kubelet[2033]: I0412 19:02:36.235849 2033 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 19:02:36.236464 kubelet[2033]: I0412 19:02:36.236441 2033 server.go:895] "Client rotation is on, will bootstrap in background" Apr 12 19:02:36.240338 kubelet[2033]: I0412 19:02:36.240289 2033 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 12 19:02:36.242943 kubelet[2033]: I0412 19:02:36.242922 2033 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 19:02:36.255594 kubelet[2033]: I0412 19:02:36.255565 2033 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 12 19:02:36.256162 kubelet[2033]: I0412 19:02:36.256143 2033 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 19:02:36.256915 kubelet[2033]: I0412 19:02:36.256878 2033 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 12 19:02:36.257208 kubelet[2033]: I0412 19:02:36.257191 2033 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 19:02:36.257332 kubelet[2033]: I0412 19:02:36.257320 2033 container_manager_linux.go:301] "Creating device plugin manager" Apr 12 19:02:36.257657 kubelet[2033]: I0412 
19:02:36.257587 2033 state_mem.go:36] "Initialized new in-memory state store" Apr 12 19:02:36.257960 kubelet[2033]: I0412 19:02:36.257943 2033 kubelet.go:393] "Attempting to sync node with API server" Apr 12 19:02:36.258109 kubelet[2033]: I0412 19:02:36.258096 2033 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 19:02:36.258272 kubelet[2033]: I0412 19:02:36.258258 2033 kubelet.go:309] "Adding apiserver pod source" Apr 12 19:02:36.258415 kubelet[2033]: I0412 19:02:36.258400 2033 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 19:02:36.262270 kubelet[2033]: I0412 19:02:36.262239 2033 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 19:02:36.304630 kubelet[2033]: I0412 19:02:36.304583 2033 server.go:1232] "Started kubelet" Apr 12 19:02:36.312506 kubelet[2033]: I0412 19:02:36.312471 2033 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 19:02:36.317980 kubelet[2033]: I0412 19:02:36.317941 2033 server.go:462] "Adding debug handlers to kubelet server" Apr 12 19:02:36.320185 kubelet[2033]: I0412 19:02:36.320150 2033 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Apr 12 19:02:36.320458 kubelet[2033]: I0412 19:02:36.320434 2033 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 19:02:36.323253 kubelet[2033]: I0412 19:02:36.323229 2033 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 19:02:36.333826 kubelet[2033]: E0412 19:02:36.324795 2033 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 12 19:02:36.333826 kubelet[2033]: E0412 19:02:36.324849 2033 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 19:02:36.348045 kubelet[2033]: I0412 19:02:36.348005 2033 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 19:02:36.348908 kubelet[2033]: I0412 19:02:36.348883 2033 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 19:02:36.349290 kubelet[2033]: I0412 19:02:36.349272 2033 reconciler_new.go:29] "Reconciler: start to sync state" Apr 12 19:02:36.399927 kubelet[2033]: I0412 19:02:36.399870 2033 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 12 19:02:36.408931 kubelet[2033]: I0412 19:02:36.403148 2033 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 12 19:02:36.408931 kubelet[2033]: I0412 19:02:36.404102 2033 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 19:02:36.408931 kubelet[2033]: I0412 19:02:36.404133 2033 kubelet.go:2303] "Starting kubelet main sync loop" Apr 12 19:02:36.408931 kubelet[2033]: E0412 19:02:36.404260 2033 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 19:02:36.475128 kubelet[2033]: I0412 19:02:36.474975 2033 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:36.504827 kubelet[2033]: E0412 19:02:36.504771 2033 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 12 19:02:36.520846 kubelet[2033]: I0412 19:02:36.519125 2033 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:36.520846 kubelet[2033]: I0412 19:02:36.519284 2033 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 
19:02:36.578828 kubelet[2033]: I0412 19:02:36.578770 2033 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 19:02:36.579115 kubelet[2033]: I0412 19:02:36.579096 2033 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 19:02:36.579240 kubelet[2033]: I0412 19:02:36.579226 2033 state_mem.go:36] "Initialized new in-memory state store" Apr 12 19:02:36.579581 kubelet[2033]: I0412 19:02:36.579565 2033 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 12 19:02:36.579790 kubelet[2033]: I0412 19:02:36.579774 2033 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 12 19:02:36.579920 kubelet[2033]: I0412 19:02:36.579906 2033 policy_none.go:49] "None policy: Start" Apr 12 19:02:36.588468 kubelet[2033]: I0412 19:02:36.588429 2033 memory_manager.go:169] "Starting memorymanager" policy="None" Apr 12 19:02:36.588775 kubelet[2033]: I0412 19:02:36.588754 2033 state_mem.go:35] "Initializing new in-memory state store" Apr 12 19:02:36.589195 kubelet[2033]: I0412 19:02:36.589178 2033 state_mem.go:75] "Updated machine memory state" Apr 12 19:02:36.601858 kubelet[2033]: I0412 19:02:36.601821 2033 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 19:02:36.610237 kubelet[2033]: I0412 19:02:36.610201 2033 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 19:02:36.705726 kubelet[2033]: I0412 19:02:36.705658 2033 topology_manager.go:215] "Topology Admit Handler" podUID="99cc92145fb2b2e39e67f45e0af5d5aa" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:36.706317 kubelet[2033]: I0412 19:02:36.706296 2033 topology_manager.go:215] "Topology Admit Handler" podUID="bbe3be810be868e2fa78b1dd507c21f9" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:36.707557 kubelet[2033]: I0412 19:02:36.707535 2033 
topology_manager.go:215] "Topology Admit Handler" podUID="25fdb341cfe2212fe583a9e0a9ec6e78" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:36.724256 kubelet[2033]: W0412 19:02:36.724213 2033 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Apr 12 19:02:36.727278 kubelet[2033]: W0412 19:02:36.727139 2033 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Apr 12 19:02:36.733354 kubelet[2033]: W0412 19:02:36.733319 2033 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Apr 12 19:02:36.741946 update_engine[1129]: I0412 19:02:36.741272 1129 update_attempter.cc:509] Updating boot flags... 
Apr 12 19:02:36.768158 kubelet[2033]: I0412 19:02:36.768114 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/99cc92145fb2b2e39e67f45e0af5d5aa-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"99cc92145fb2b2e39e67f45e0af5d5aa\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:36.768517 kubelet[2033]: I0412 19:02:36.768488 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99cc92145fb2b2e39e67f45e0af5d5aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"99cc92145fb2b2e39e67f45e0af5d5aa\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:36.768728 kubelet[2033]: I0412 19:02:36.768702 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25fdb341cfe2212fe583a9e0a9ec6e78-ca-certs\") pod \"kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"25fdb341cfe2212fe583a9e0a9ec6e78\") " pod="kube-system/kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:36.768962 kubelet[2033]: I0412 19:02:36.768937 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99cc92145fb2b2e39e67f45e0af5d5aa-ca-certs\") pod \"kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"99cc92145fb2b2e39e67f45e0af5d5aa\") " 
pod="kube-system/kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:36.770040 kubelet[2033]: I0412 19:02:36.769979 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99cc92145fb2b2e39e67f45e0af5d5aa-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"99cc92145fb2b2e39e67f45e0af5d5aa\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:36.770325 kubelet[2033]: I0412 19:02:36.770304 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/99cc92145fb2b2e39e67f45e0af5d5aa-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"99cc92145fb2b2e39e67f45e0af5d5aa\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:36.770539 kubelet[2033]: I0412 19:02:36.770515 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bbe3be810be868e2fa78b1dd507c21f9-kubeconfig\") pod \"kube-scheduler-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"bbe3be810be868e2fa78b1dd507c21f9\") " pod="kube-system/kube-scheduler-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:36.770758 kubelet[2033]: I0412 19:02:36.770733 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25fdb341cfe2212fe583a9e0a9ec6e78-k8s-certs\") pod \"kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"25fdb341cfe2212fe583a9e0a9ec6e78\") " 
pod="kube-system/kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:36.771062 kubelet[2033]: I0412 19:02:36.771044 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25fdb341cfe2212fe583a9e0a9ec6e78-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" (UID: \"25fdb341cfe2212fe583a9e0a9ec6e78\") " pod="kube-system/kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:37.279424 kubelet[2033]: I0412 19:02:37.277244 2033 apiserver.go:52] "Watching apiserver" Apr 12 19:02:37.349988 kubelet[2033]: I0412 19:02:37.349935 2033 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 19:02:37.370357 sudo[2045]: pam_unix(sudo:session): session closed for user root Apr 12 19:02:37.440264 kubelet[2033]: W0412 19:02:37.440217 2033 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Apr 12 19:02:37.440658 kubelet[2033]: E0412 19:02:37.440633 2033 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:37.441435 kubelet[2033]: W0412 19:02:37.441394 2033 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Apr 12 19:02:37.441670 kubelet[2033]: E0412 19:02:37.441651 2033 kubelet.go:1890] "Failed creating a mirror pod for" err="pods 
\"kube-scheduler-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" Apr 12 19:02:37.470988 kubelet[2033]: I0412 19:02:37.470936 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" podStartSLOduration=1.47084447 podCreationTimestamp="2024-04-12 19:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 19:02:37.461044691 +0000 UTC m=+1.340880143" watchObservedRunningTime="2024-04-12 19:02:37.47084447 +0000 UTC m=+1.350679910" Apr 12 19:02:37.482976 kubelet[2033]: I0412 19:02:37.482934 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" podStartSLOduration=1.4828852829999999 podCreationTimestamp="2024-04-12 19:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 19:02:37.471748726 +0000 UTC m=+1.351584176" watchObservedRunningTime="2024-04-12 19:02:37.482885283 +0000 UTC m=+1.362720733" Apr 12 19:02:37.497434 kubelet[2033]: I0412 19:02:37.497394 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" podStartSLOduration=1.497346628 podCreationTimestamp="2024-04-12 19:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 19:02:37.483963414 +0000 UTC m=+1.363798863" watchObservedRunningTime="2024-04-12 19:02:37.497346628 +0000 UTC m=+1.377182077" Apr 12 19:02:38.931447 sudo[1313]: pam_unix(sudo:session): session closed for 
user root Apr 12 19:02:38.984084 sshd[1310]: pam_unix(sshd:session): session closed for user core Apr 12 19:02:38.989112 systemd[1]: sshd@4-10.128.0.35:22-139.178.89.65:56572.service: Deactivated successfully. Apr 12 19:02:38.990544 systemd[1]: session-5.scope: Deactivated successfully. Apr 12 19:02:38.990846 systemd[1]: session-5.scope: Consumed 6.290s CPU time. Apr 12 19:02:38.991727 systemd-logind[1127]: Session 5 logged out. Waiting for processes to exit. Apr 12 19:02:38.993269 systemd-logind[1127]: Removed session 5. Apr 12 19:02:50.593282 kubelet[2033]: I0412 19:02:50.592911 2033 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 12 19:02:50.594532 env[1144]: time="2024-04-12T19:02:50.594478018Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 12 19:02:50.595924 kubelet[2033]: I0412 19:02:50.595536 2033 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 12 19:02:50.657200 kubelet[2033]: I0412 19:02:50.657147 2033 topology_manager.go:215] "Topology Admit Handler" podUID="b9f4357e-ef50-45e5-a473-b6d66a14c187" podNamespace="kube-system" podName="cilium-8x8ls" Apr 12 19:02:50.666574 systemd[1]: Created slice kubepods-burstable-podb9f4357e_ef50_45e5_a473_b6d66a14c187.slice. 
Apr 12 19:02:50.682964 kubelet[2033]: W0412 19:02:50.682913 2033 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:02:50.683333 kubelet[2033]: E0412 19:02:50.683295 2033 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:02:50.683820 kubelet[2033]: W0412 19:02:50.683772 2033 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:02:50.684023 kubelet[2033]: E0412 19:02:50.684000 2033 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:02:50.684698 kubelet[2033]: W0412 19:02:50.684667 2033 
reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:02:50.684902 kubelet[2033]: E0412 19:02:50.684884 2033 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:02:50.685576 kubelet[2033]: W0412 19:02:50.685534 2033 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:02:50.685750 kubelet[2033]: E0412 19:02:50.685732 2033 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:02:50.687533 kubelet[2033]: I0412 19:02:50.687507 2033 topology_manager.go:215] "Topology Admit Handler" 
podUID="2df469f8-7107-49ef-9f32-c79221b6243c" podNamespace="kube-system" podName="kube-proxy-tjkzf" Apr 12 19:02:50.696424 systemd[1]: Created slice kubepods-besteffort-pod2df469f8_7107_49ef_9f32_c79221b6243c.slice. Apr 12 19:02:50.769220 kubelet[2033]: I0412 19:02:50.769165 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2df469f8-7107-49ef-9f32-c79221b6243c-lib-modules\") pod \"kube-proxy-tjkzf\" (UID: \"2df469f8-7107-49ef-9f32-c79221b6243c\") " pod="kube-system/kube-proxy-tjkzf" Apr 12 19:02:50.769780 kubelet[2033]: I0412 19:02:50.769688 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrmzw\" (UniqueName: \"kubernetes.io/projected/b9f4357e-ef50-45e5-a473-b6d66a14c187-kube-api-access-wrmzw\") pod \"cilium-8x8ls\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") " pod="kube-system/cilium-8x8ls" Apr 12 19:02:50.770109 kubelet[2033]: I0412 19:02:50.770087 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-lib-modules\") pod \"cilium-8x8ls\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") " pod="kube-system/cilium-8x8ls" Apr 12 19:02:50.770350 kubelet[2033]: I0412 19:02:50.770288 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2df469f8-7107-49ef-9f32-c79221b6243c-xtables-lock\") pod \"kube-proxy-tjkzf\" (UID: \"2df469f8-7107-49ef-9f32-c79221b6243c\") " pod="kube-system/kube-proxy-tjkzf" Apr 12 19:02:50.770566 kubelet[2033]: I0412 19:02:50.770539 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2df469f8-7107-49ef-9f32-c79221b6243c-kube-proxy\") pod 
\"kube-proxy-tjkzf\" (UID: \"2df469f8-7107-49ef-9f32-c79221b6243c\") " pod="kube-system/kube-proxy-tjkzf" Apr 12 19:02:50.770962 kubelet[2033]: I0412 19:02:50.770938 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrffl\" (UniqueName: \"kubernetes.io/projected/2df469f8-7107-49ef-9f32-c79221b6243c-kube-api-access-zrffl\") pod \"kube-proxy-tjkzf\" (UID: \"2df469f8-7107-49ef-9f32-c79221b6243c\") " pod="kube-system/kube-proxy-tjkzf" Apr 12 19:02:50.771333 kubelet[2033]: I0412 19:02:50.771299 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-cilium-run\") pod \"cilium-8x8ls\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") " pod="kube-system/cilium-8x8ls" Apr 12 19:02:50.771552 kubelet[2033]: I0412 19:02:50.771504 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-host-proc-sys-net\") pod \"cilium-8x8ls\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") " pod="kube-system/cilium-8x8ls" Apr 12 19:02:50.771786 kubelet[2033]: I0412 19:02:50.771736 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-xtables-lock\") pod \"cilium-8x8ls\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") " pod="kube-system/cilium-8x8ls" Apr 12 19:02:50.772068 kubelet[2033]: I0412 19:02:50.772049 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9f4357e-ef50-45e5-a473-b6d66a14c187-cilium-config-path\") pod \"cilium-8x8ls\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") " 
pod="kube-system/cilium-8x8ls" Apr 12 19:02:50.772324 kubelet[2033]: I0412 19:02:50.772265 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-host-proc-sys-kernel\") pod \"cilium-8x8ls\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") " pod="kube-system/cilium-8x8ls" Apr 12 19:02:50.772563 kubelet[2033]: I0412 19:02:50.772506 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9f4357e-ef50-45e5-a473-b6d66a14c187-hubble-tls\") pod \"cilium-8x8ls\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") " pod="kube-system/cilium-8x8ls" Apr 12 19:02:50.772763 kubelet[2033]: I0412 19:02:50.772737 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-cni-path\") pod \"cilium-8x8ls\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") " pod="kube-system/cilium-8x8ls" Apr 12 19:02:50.772968 kubelet[2033]: I0412 19:02:50.772942 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9f4357e-ef50-45e5-a473-b6d66a14c187-clustermesh-secrets\") pod \"cilium-8x8ls\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") " pod="kube-system/cilium-8x8ls" Apr 12 19:02:50.773122 kubelet[2033]: I0412 19:02:50.773107 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-bpf-maps\") pod \"cilium-8x8ls\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") " pod="kube-system/cilium-8x8ls" Apr 12 19:02:50.773315 kubelet[2033]: I0412 19:02:50.773288 2033 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-hostproc\") pod \"cilium-8x8ls\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") " pod="kube-system/cilium-8x8ls" Apr 12 19:02:50.773459 kubelet[2033]: I0412 19:02:50.773434 2033 topology_manager.go:215] "Topology Admit Handler" podUID="8797858e-d6f0-44ea-b2e7-23aee40ef301" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-x49px" Apr 12 19:02:50.774172 kubelet[2033]: I0412 19:02:50.773440 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-cilium-cgroup\") pod \"cilium-8x8ls\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") " pod="kube-system/cilium-8x8ls" Apr 12 19:02:50.774407 kubelet[2033]: I0412 19:02:50.774374 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-etc-cni-netd\") pod \"cilium-8x8ls\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") " pod="kube-system/cilium-8x8ls" Apr 12 19:02:50.781488 systemd[1]: Created slice kubepods-besteffort-pod8797858e_d6f0_44ea_b2e7_23aee40ef301.slice. 
Apr 12 19:02:50.877062 kubelet[2033]: I0412 19:02:50.877017 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8797858e-d6f0-44ea-b2e7-23aee40ef301-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-x49px\" (UID: \"8797858e-d6f0-44ea-b2e7-23aee40ef301\") " pod="kube-system/cilium-operator-6bc8ccdb58-x49px" Apr 12 19:02:50.878153 kubelet[2033]: I0412 19:02:50.878112 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml9x2\" (UniqueName: \"kubernetes.io/projected/8797858e-d6f0-44ea-b2e7-23aee40ef301-kube-api-access-ml9x2\") pod \"cilium-operator-6bc8ccdb58-x49px\" (UID: \"8797858e-d6f0-44ea-b2e7-23aee40ef301\") " pod="kube-system/cilium-operator-6bc8ccdb58-x49px" Apr 12 19:02:51.876311 kubelet[2033]: E0412 19:02:51.876235 2033 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Apr 12 19:02:51.877213 kubelet[2033]: E0412 19:02:51.876416 2033 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b9f4357e-ef50-45e5-a473-b6d66a14c187-cilium-config-path podName:b9f4357e-ef50-45e5-a473-b6d66a14c187 nodeName:}" failed. No retries permitted until 2024-04-12 19:02:52.376375489 +0000 UTC m=+16.256210929 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/b9f4357e-ef50-45e5-a473-b6d66a14c187-cilium-config-path") pod "cilium-8x8ls" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187") : failed to sync configmap cache: timed out waiting for the condition Apr 12 19:02:51.887545 kubelet[2033]: E0412 19:02:51.887480 2033 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 12 19:02:51.887545 kubelet[2033]: E0412 19:02:51.887545 2033 projected.go:198] Error preparing data for projected volume kube-api-access-wrmzw for pod kube-system/cilium-8x8ls: failed to sync configmap cache: timed out waiting for the condition Apr 12 19:02:51.887886 kubelet[2033]: E0412 19:02:51.887686 2033 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b9f4357e-ef50-45e5-a473-b6d66a14c187-kube-api-access-wrmzw podName:b9f4357e-ef50-45e5-a473-b6d66a14c187 nodeName:}" failed. No retries permitted until 2024-04-12 19:02:52.387652975 +0000 UTC m=+16.267488421 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrmzw" (UniqueName: "kubernetes.io/projected/b9f4357e-ef50-45e5-a473-b6d66a14c187-kube-api-access-wrmzw") pod "cilium-8x8ls" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187") : failed to sync configmap cache: timed out waiting for the condition Apr 12 19:02:51.888025 kubelet[2033]: E0412 19:02:51.887480 2033 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 12 19:02:51.888025 kubelet[2033]: E0412 19:02:51.888015 2033 projected.go:198] Error preparing data for projected volume kube-api-access-zrffl for pod kube-system/kube-proxy-tjkzf: failed to sync configmap cache: timed out waiting for the condition Apr 12 19:02:51.888162 kubelet[2033]: E0412 19:02:51.888086 2033 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2df469f8-7107-49ef-9f32-c79221b6243c-kube-api-access-zrffl podName:2df469f8-7107-49ef-9f32-c79221b6243c nodeName:}" failed. No retries permitted until 2024-04-12 19:02:52.388062744 +0000 UTC m=+16.267898195 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zrffl" (UniqueName: "kubernetes.io/projected/2df469f8-7107-49ef-9f32-c79221b6243c-kube-api-access-zrffl") pod "kube-proxy-tjkzf" (UID: "2df469f8-7107-49ef-9f32-c79221b6243c") : failed to sync configmap cache: timed out waiting for the condition Apr 12 19:02:51.980981 kubelet[2033]: E0412 19:02:51.980909 2033 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Apr 12 19:02:51.981278 kubelet[2033]: E0412 19:02:51.981107 2033 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8797858e-d6f0-44ea-b2e7-23aee40ef301-cilium-config-path podName:8797858e-d6f0-44ea-b2e7-23aee40ef301 nodeName:}" failed. 
No retries permitted until 2024-04-12 19:02:52.481028959 +0000 UTC m=+16.360864403 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/8797858e-d6f0-44ea-b2e7-23aee40ef301-cilium-config-path") pod "cilium-operator-6bc8ccdb58-x49px" (UID: "8797858e-d6f0-44ea-b2e7-23aee40ef301") : failed to sync configmap cache: timed out waiting for the condition Apr 12 19:02:52.475095 env[1144]: time="2024-04-12T19:02:52.475022152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8x8ls,Uid:b9f4357e-ef50-45e5-a473-b6d66a14c187,Namespace:kube-system,Attempt:0,}" Apr 12 19:02:52.503496 env[1144]: time="2024-04-12T19:02:52.503119829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 19:02:52.503496 env[1144]: time="2024-04-12T19:02:52.503266896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 19:02:52.503496 env[1144]: time="2024-04-12T19:02:52.503334218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 19:02:52.503901 env[1144]: time="2024-04-12T19:02:52.503608592Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699 pid=2138 runtime=io.containerd.runc.v2 Apr 12 19:02:52.506199 env[1144]: time="2024-04-12T19:02:52.506139506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tjkzf,Uid:2df469f8-7107-49ef-9f32-c79221b6243c,Namespace:kube-system,Attempt:0,}" Apr 12 19:02:52.527760 systemd[1]: Started cri-containerd-c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699.scope. 
Apr 12 19:02:52.553052 env[1144]: time="2024-04-12T19:02:52.552941503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 19:02:52.553376 env[1144]: time="2024-04-12T19:02:52.553331125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 19:02:52.553577 env[1144]: time="2024-04-12T19:02:52.553541378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 19:02:52.554166 env[1144]: time="2024-04-12T19:02:52.554104467Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be111fcd8dfc14223eb029b1a78991eec86e8b903a2c5d0df02eeefa1a97bf3f pid=2166 runtime=io.containerd.runc.v2 Apr 12 19:02:52.588901 env[1144]: time="2024-04-12T19:02:52.588838497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-x49px,Uid:8797858e-d6f0-44ea-b2e7-23aee40ef301,Namespace:kube-system,Attempt:0,}" Apr 12 19:02:52.602582 systemd[1]: Started cri-containerd-be111fcd8dfc14223eb029b1a78991eec86e8b903a2c5d0df02eeefa1a97bf3f.scope. Apr 12 19:02:52.663266 env[1144]: time="2024-04-12T19:02:52.663197926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8x8ls,Uid:b9f4357e-ef50-45e5-a473-b6d66a14c187,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\"" Apr 12 19:02:52.672649 env[1144]: time="2024-04-12T19:02:52.672516926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 19:02:52.672962 kubelet[2033]: E0412 19:02:52.672610 2033 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url Apr 12 19:02:52.673590 env[1144]: time="2024-04-12T19:02:52.673534283Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 12 19:02:52.676525 env[1144]: time="2024-04-12T19:02:52.676437274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 19:02:52.676749 env[1144]: time="2024-04-12T19:02:52.676549427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 19:02:52.677121 env[1144]: time="2024-04-12T19:02:52.677055187Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8 pid=2214 runtime=io.containerd.runc.v2 Apr 12 19:02:52.701774 env[1144]: time="2024-04-12T19:02:52.689827995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tjkzf,Uid:2df469f8-7107-49ef-9f32-c79221b6243c,Namespace:kube-system,Attempt:0,} returns sandbox id \"be111fcd8dfc14223eb029b1a78991eec86e8b903a2c5d0df02eeefa1a97bf3f\"" Apr 12 19:02:52.702485 env[1144]: time="2024-04-12T19:02:52.702382787Z" level=info msg="CreateContainer within sandbox \"be111fcd8dfc14223eb029b1a78991eec86e8b903a2c5d0df02eeefa1a97bf3f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 12 19:02:52.736517 systemd[1]: Started cri-containerd-d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8.scope. 
Apr 12 19:02:52.749733 env[1144]: time="2024-04-12T19:02:52.749652513Z" level=info msg="CreateContainer within sandbox \"be111fcd8dfc14223eb029b1a78991eec86e8b903a2c5d0df02eeefa1a97bf3f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f307762e926c3cd36050523931e3d0c1a20b44bb8dc879ee03fc96035f5d7183\"" Apr 12 19:02:52.754362 env[1144]: time="2024-04-12T19:02:52.754301949Z" level=info msg="StartContainer for \"f307762e926c3cd36050523931e3d0c1a20b44bb8dc879ee03fc96035f5d7183\"" Apr 12 19:02:52.798082 systemd[1]: Started cri-containerd-f307762e926c3cd36050523931e3d0c1a20b44bb8dc879ee03fc96035f5d7183.scope. Apr 12 19:02:52.863934 env[1144]: time="2024-04-12T19:02:52.863865276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-x49px,Uid:8797858e-d6f0-44ea-b2e7-23aee40ef301,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8\"" Apr 12 19:02:52.877857 env[1144]: time="2024-04-12T19:02:52.876739768Z" level=info msg="StartContainer for \"f307762e926c3cd36050523931e3d0c1a20b44bb8dc879ee03fc96035f5d7183\" returns successfully" Apr 12 19:02:53.472424 kubelet[2033]: I0412 19:02:53.472330 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tjkzf" podStartSLOduration=3.4722671 podCreationTimestamp="2024-04-12 19:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 19:02:53.471613285 +0000 UTC m=+17.351448737" watchObservedRunningTime="2024-04-12 19:02:53.4722671 +0000 UTC m=+17.352102550" Apr 12 19:02:53.573963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount624828012.mount: Deactivated successfully. Apr 12 19:02:58.429140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount711747429.mount: Deactivated successfully. 
Apr 12 19:03:01.857571 env[1144]: time="2024-04-12T19:03:01.857501682Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:03:01.861003 env[1144]: time="2024-04-12T19:03:01.860948322Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:03:01.863220 env[1144]: time="2024-04-12T19:03:01.863174876Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:03:01.864143 env[1144]: time="2024-04-12T19:03:01.864088078Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 12 19:03:01.867613 env[1144]: time="2024-04-12T19:03:01.867549364Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 12 19:03:01.869345 env[1144]: time="2024-04-12T19:03:01.869291582Z" level=info msg="CreateContainer within sandbox \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 19:03:01.887739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052567533.mount: Deactivated successfully. 
Apr 12 19:03:01.893685 env[1144]: time="2024-04-12T19:03:01.893604976Z" level=info msg="CreateContainer within sandbox \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec\"" Apr 12 19:03:01.894998 env[1144]: time="2024-04-12T19:03:01.894960844Z" level=info msg="StartContainer for \"29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec\"" Apr 12 19:03:01.935304 systemd[1]: Started cri-containerd-29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec.scope. Apr 12 19:03:01.995870 env[1144]: time="2024-04-12T19:03:01.994262981Z" level=info msg="StartContainer for \"29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec\" returns successfully" Apr 12 19:03:02.005447 systemd[1]: cri-containerd-29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec.scope: Deactivated successfully. Apr 12 19:03:02.883376 systemd[1]: run-containerd-runc-k8s.io-29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec-runc.q1ZKSg.mount: Deactivated successfully. Apr 12 19:03:02.884059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec-rootfs.mount: Deactivated successfully. 
Apr 12 19:03:03.847211 env[1144]: time="2024-04-12T19:03:03.847112738Z" level=info msg="shim disconnected" id=29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec Apr 12 19:03:03.847211 env[1144]: time="2024-04-12T19:03:03.847207658Z" level=warning msg="cleaning up after shim disconnected" id=29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec namespace=k8s.io Apr 12 19:03:03.847211 env[1144]: time="2024-04-12T19:03:03.847222962Z" level=info msg="cleaning up dead shim" Apr 12 19:03:03.861682 env[1144]: time="2024-04-12T19:03:03.861603550Z" level=warning msg="cleanup warnings time=\"2024-04-12T19:03:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2465 runtime=io.containerd.runc.v2\n" Apr 12 19:03:04.285311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1575674924.mount: Deactivated successfully. Apr 12 19:03:04.517362 env[1144]: time="2024-04-12T19:03:04.517291347Z" level=info msg="CreateContainer within sandbox \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 19:03:04.565921 env[1144]: time="2024-04-12T19:03:04.564079456Z" level=info msg="CreateContainer within sandbox \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b\"" Apr 12 19:03:04.564426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount766663930.mount: Deactivated successfully. Apr 12 19:03:04.570172 env[1144]: time="2024-04-12T19:03:04.566507519Z" level=info msg="StartContainer for \"bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b\"" Apr 12 19:03:04.605076 systemd[1]: Started cri-containerd-bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b.scope. 
Apr 12 19:03:04.658061 env[1144]: time="2024-04-12T19:03:04.657978371Z" level=info msg="StartContainer for \"bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b\" returns successfully" Apr 12 19:03:04.682232 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 19:03:04.682656 systemd[1]: Stopped systemd-sysctl.service. Apr 12 19:03:04.684999 systemd[1]: Stopping systemd-sysctl.service... Apr 12 19:03:04.689267 systemd[1]: Starting systemd-sysctl.service... Apr 12 19:03:04.695871 systemd[1]: cri-containerd-bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b.scope: Deactivated successfully. Apr 12 19:03:04.717812 systemd[1]: Finished systemd-sysctl.service. Apr 12 19:03:04.770687 env[1144]: time="2024-04-12T19:03:04.770606706Z" level=info msg="shim disconnected" id=bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b Apr 12 19:03:04.770687 env[1144]: time="2024-04-12T19:03:04.770685971Z" level=warning msg="cleaning up after shim disconnected" id=bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b namespace=k8s.io Apr 12 19:03:04.771189 env[1144]: time="2024-04-12T19:03:04.770700576Z" level=info msg="cleaning up dead shim" Apr 12 19:03:04.799999 env[1144]: time="2024-04-12T19:03:04.799921045Z" level=warning msg="cleanup warnings time=\"2024-04-12T19:03:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2529 runtime=io.containerd.runc.v2\n" Apr 12 19:03:05.361979 env[1144]: time="2024-04-12T19:03:05.361902869Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:03:05.364930 env[1144]: time="2024-04-12T19:03:05.364882185Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Apr 12 19:03:05.367437 env[1144]: time="2024-04-12T19:03:05.367393923Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 19:03:05.368193 env[1144]: time="2024-04-12T19:03:05.368138741Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 12 19:03:05.372540 env[1144]: time="2024-04-12T19:03:05.371951246Z" level=info msg="CreateContainer within sandbox \"d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 12 19:03:05.389782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322792795.mount: Deactivated successfully. Apr 12 19:03:05.396774 env[1144]: time="2024-04-12T19:03:05.396727377Z" level=info msg="CreateContainer within sandbox \"d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85\"" Apr 12 19:03:05.398031 env[1144]: time="2024-04-12T19:03:05.397970753Z" level=info msg="StartContainer for \"07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85\"" Apr 12 19:03:05.432503 systemd[1]: Started cri-containerd-07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85.scope. 
Apr 12 19:03:05.475499 env[1144]: time="2024-04-12T19:03:05.475392245Z" level=info msg="StartContainer for \"07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85\" returns successfully" Apr 12 19:03:05.515859 env[1144]: time="2024-04-12T19:03:05.515788196Z" level=info msg="CreateContainer within sandbox \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 19:03:05.529831 kubelet[2033]: I0412 19:03:05.529743 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-x49px" podStartSLOduration=3.030148309 podCreationTimestamp="2024-04-12 19:02:50 +0000 UTC" firstStartedPulling="2024-04-12 19:02:52.868998423 +0000 UTC m=+16.748833873" lastFinishedPulling="2024-04-12 19:03:05.368533317 +0000 UTC m=+29.248368762" observedRunningTime="2024-04-12 19:03:05.529093828 +0000 UTC m=+29.408929281" watchObservedRunningTime="2024-04-12 19:03:05.529683198 +0000 UTC m=+29.409518659" Apr 12 19:03:05.554115 env[1144]: time="2024-04-12T19:03:05.554046570Z" level=info msg="CreateContainer within sandbox \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62\"" Apr 12 19:03:05.554957 env[1144]: time="2024-04-12T19:03:05.554903162Z" level=info msg="StartContainer for \"de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62\"" Apr 12 19:03:05.595628 systemd[1]: Started cri-containerd-de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62.scope. 
Apr 12 19:03:05.667077 env[1144]: time="2024-04-12T19:03:05.666980861Z" level=info msg="StartContainer for \"de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62\" returns successfully" Apr 12 19:03:05.680619 systemd[1]: cri-containerd-de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62.scope: Deactivated successfully. Apr 12 19:03:05.865027 env[1144]: time="2024-04-12T19:03:05.864933289Z" level=info msg="shim disconnected" id=de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62 Apr 12 19:03:05.865027 env[1144]: time="2024-04-12T19:03:05.865026261Z" level=warning msg="cleaning up after shim disconnected" id=de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62 namespace=k8s.io Apr 12 19:03:05.865412 env[1144]: time="2024-04-12T19:03:05.865041981Z" level=info msg="cleaning up dead shim" Apr 12 19:03:05.886569 env[1144]: time="2024-04-12T19:03:05.886508428Z" level=warning msg="cleanup warnings time=\"2024-04-12T19:03:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2623 runtime=io.containerd.runc.v2\n" Apr 12 19:03:06.516863 env[1144]: time="2024-04-12T19:03:06.516788353Z" level=info msg="CreateContainer within sandbox \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 19:03:06.541311 env[1144]: time="2024-04-12T19:03:06.541245037Z" level=info msg="CreateContainer within sandbox \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8\"" Apr 12 19:03:06.542030 env[1144]: time="2024-04-12T19:03:06.541977036Z" level=info msg="StartContainer for \"1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8\"" Apr 12 19:03:06.579934 systemd[1]: Started cri-containerd-1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8.scope. 
Apr 12 19:03:06.664293 env[1144]: time="2024-04-12T19:03:06.664228206Z" level=info msg="StartContainer for \"1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8\" returns successfully" Apr 12 19:03:06.671272 systemd[1]: cri-containerd-1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8.scope: Deactivated successfully. Apr 12 19:03:06.723893 env[1144]: time="2024-04-12T19:03:06.723830439Z" level=info msg="shim disconnected" id=1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8 Apr 12 19:03:06.724330 env[1144]: time="2024-04-12T19:03:06.724297361Z" level=warning msg="cleaning up after shim disconnected" id=1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8 namespace=k8s.io Apr 12 19:03:06.724500 env[1144]: time="2024-04-12T19:03:06.724476610Z" level=info msg="cleaning up dead shim" Apr 12 19:03:06.724977 env[1144]: time="2024-04-12T19:03:06.724868463Z" level=error msg="collecting metrics for 1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8" error="ttrpc: closed: unknown" Apr 12 19:03:06.755349 env[1144]: time="2024-04-12T19:03:06.755295094Z" level=warning msg="cleanup warnings time=\"2024-04-12T19:03:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2679 runtime=io.containerd.runc.v2\n" Apr 12 19:03:07.274176 systemd[1]: run-containerd-runc-k8s.io-1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8-runc.KbgFWB.mount: Deactivated successfully. Apr 12 19:03:07.274348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8-rootfs.mount: Deactivated successfully. 
Apr 12 19:03:07.526483 env[1144]: time="2024-04-12T19:03:07.524058213Z" level=info msg="CreateContainer within sandbox \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 19:03:07.550911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848838502.mount: Deactivated successfully. Apr 12 19:03:07.561675 env[1144]: time="2024-04-12T19:03:07.561586768Z" level=info msg="CreateContainer within sandbox \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af\"" Apr 12 19:03:07.562708 env[1144]: time="2024-04-12T19:03:07.562657501Z" level=info msg="StartContainer for \"8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af\"" Apr 12 19:03:07.603980 systemd[1]: Started cri-containerd-8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af.scope. Apr 12 19:03:07.668373 env[1144]: time="2024-04-12T19:03:07.668252204Z" level=info msg="StartContainer for \"8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af\" returns successfully" Apr 12 19:03:07.958264 kubelet[2033]: I0412 19:03:07.957117 2033 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Apr 12 19:03:07.991464 kubelet[2033]: I0412 19:03:07.991417 2033 topology_manager.go:215] "Topology Admit Handler" podUID="d9a508fd-d9c0-43c3-8677-86ee8d6ae365" podNamespace="kube-system" podName="coredns-5dd5756b68-5ckhn" Apr 12 19:03:08.000189 systemd[1]: Created slice kubepods-burstable-podd9a508fd_d9c0_43c3_8677_86ee8d6ae365.slice. 
Apr 12 19:03:08.013133 kubelet[2033]: I0412 19:03:08.013086 2033 topology_manager.go:215] "Topology Admit Handler" podUID="fdb2eaa9-3948-4c24-8492-a3d5deed02de" podNamespace="kube-system" podName="coredns-5dd5756b68-qxj52" Apr 12 19:03:08.021106 systemd[1]: Created slice kubepods-burstable-podfdb2eaa9_3948_4c24_8492_a3d5deed02de.slice. Apr 12 19:03:08.120095 kubelet[2033]: I0412 19:03:08.120031 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdb2eaa9-3948-4c24-8492-a3d5deed02de-config-volume\") pod \"coredns-5dd5756b68-qxj52\" (UID: \"fdb2eaa9-3948-4c24-8492-a3d5deed02de\") " pod="kube-system/coredns-5dd5756b68-qxj52" Apr 12 19:03:08.120355 kubelet[2033]: I0412 19:03:08.120122 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x72h\" (UniqueName: \"kubernetes.io/projected/d9a508fd-d9c0-43c3-8677-86ee8d6ae365-kube-api-access-7x72h\") pod \"coredns-5dd5756b68-5ckhn\" (UID: \"d9a508fd-d9c0-43c3-8677-86ee8d6ae365\") " pod="kube-system/coredns-5dd5756b68-5ckhn" Apr 12 19:03:08.120355 kubelet[2033]: I0412 19:03:08.120162 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9a508fd-d9c0-43c3-8677-86ee8d6ae365-config-volume\") pod \"coredns-5dd5756b68-5ckhn\" (UID: \"d9a508fd-d9c0-43c3-8677-86ee8d6ae365\") " pod="kube-system/coredns-5dd5756b68-5ckhn" Apr 12 19:03:08.120355 kubelet[2033]: I0412 19:03:08.120199 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x755w\" (UniqueName: \"kubernetes.io/projected/fdb2eaa9-3948-4c24-8492-a3d5deed02de-kube-api-access-x755w\") pod \"coredns-5dd5756b68-qxj52\" (UID: \"fdb2eaa9-3948-4c24-8492-a3d5deed02de\") " pod="kube-system/coredns-5dd5756b68-qxj52" Apr 12 19:03:08.306373 env[1144]: 
time="2024-04-12T19:03:08.305660650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5ckhn,Uid:d9a508fd-d9c0-43c3-8677-86ee8d6ae365,Namespace:kube-system,Attempt:0,}" Apr 12 19:03:08.328558 env[1144]: time="2024-04-12T19:03:08.328494979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qxj52,Uid:fdb2eaa9-3948-4c24-8492-a3d5deed02de,Namespace:kube-system,Attempt:0,}" Apr 12 19:03:08.552140 kubelet[2033]: I0412 19:03:08.552101 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8x8ls" podStartSLOduration=9.353067917 podCreationTimestamp="2024-04-12 19:02:50 +0000 UTC" firstStartedPulling="2024-04-12 19:02:52.665780524 +0000 UTC m=+16.545615948" lastFinishedPulling="2024-04-12 19:03:01.864643758 +0000 UTC m=+25.744479202" observedRunningTime="2024-04-12 19:03:08.550353698 +0000 UTC m=+32.430189149" watchObservedRunningTime="2024-04-12 19:03:08.551931171 +0000 UTC m=+32.431766621" Apr 12 19:03:10.241428 systemd-networkd[1023]: cilium_host: Link UP Apr 12 19:03:10.244947 systemd-networkd[1023]: cilium_net: Link UP Apr 12 19:03:10.250987 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Apr 12 19:03:10.261178 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Apr 12 19:03:10.251981 systemd-networkd[1023]: cilium_net: Gained carrier Apr 12 19:03:10.262457 systemd-networkd[1023]: cilium_host: Gained carrier Apr 12 19:03:10.411721 systemd-networkd[1023]: cilium_vxlan: Link UP Apr 12 19:03:10.411742 systemd-networkd[1023]: cilium_vxlan: Gained carrier Apr 12 19:03:10.708841 kernel: NET: Registered PF_ALG protocol family Apr 12 19:03:10.841609 systemd-networkd[1023]: cilium_host: Gained IPv6LL Apr 12 19:03:11.161574 systemd-networkd[1023]: cilium_net: Gained IPv6LL Apr 12 19:03:11.632970 systemd-networkd[1023]: lxc_health: Link UP Apr 12 19:03:11.660898 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 
19:03:11.661454 systemd-networkd[1023]: lxc_health: Gained carrier Apr 12 19:03:11.881432 systemd-networkd[1023]: lxc568f2afc0bd7: Link UP Apr 12 19:03:11.899829 kernel: eth0: renamed from tmpf205e Apr 12 19:03:11.914831 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc568f2afc0bd7: link becomes ready Apr 12 19:03:11.917014 systemd-networkd[1023]: lxc568f2afc0bd7: Gained carrier Apr 12 19:03:11.947989 systemd-networkd[1023]: lxc50928cc21772: Link UP Apr 12 19:03:11.973549 kernel: eth0: renamed from tmp67d3a Apr 12 19:03:11.988844 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc50928cc21772: link becomes ready Apr 12 19:03:11.989486 systemd-networkd[1023]: lxc50928cc21772: Gained carrier Apr 12 19:03:12.121581 systemd-networkd[1023]: cilium_vxlan: Gained IPv6LL Apr 12 19:03:13.145609 systemd-networkd[1023]: lxc_health: Gained IPv6LL Apr 12 19:03:13.721043 systemd-networkd[1023]: lxc50928cc21772: Gained IPv6LL Apr 12 19:03:13.785984 systemd-networkd[1023]: lxc568f2afc0bd7: Gained IPv6LL Apr 12 19:03:17.251591 env[1144]: time="2024-04-12T19:03:17.251443208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 19:03:17.252361 env[1144]: time="2024-04-12T19:03:17.252301731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 19:03:17.252648 env[1144]: time="2024-04-12T19:03:17.252602221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 19:03:17.253097 env[1144]: time="2024-04-12T19:03:17.253039355Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f205e8392f0b3e6198ce02ce8d6fee5df82b05b073890276c9c700b0334cb5e4 pid=3221 runtime=io.containerd.runc.v2 Apr 12 19:03:17.270147 env[1144]: time="2024-04-12T19:03:17.269788442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 19:03:17.270147 env[1144]: time="2024-04-12T19:03:17.269860491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 19:03:17.270147 env[1144]: time="2024-04-12T19:03:17.269881400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 19:03:17.273192 env[1144]: time="2024-04-12T19:03:17.272309793Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/67d3a00fbffe21e788b29f1fc88f11d0554414351c4b65393b233f03fdcd8b10 pid=3224 runtime=io.containerd.runc.v2 Apr 12 19:03:17.307113 systemd[1]: Started cri-containerd-f205e8392f0b3e6198ce02ce8d6fee5df82b05b073890276c9c700b0334cb5e4.scope. Apr 12 19:03:17.358099 systemd[1]: Started cri-containerd-67d3a00fbffe21e788b29f1fc88f11d0554414351c4b65393b233f03fdcd8b10.scope. 
Apr 12 19:03:17.430136 env[1144]: time="2024-04-12T19:03:17.430079625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5ckhn,Uid:d9a508fd-d9c0-43c3-8677-86ee8d6ae365,Namespace:kube-system,Attempt:0,} returns sandbox id \"f205e8392f0b3e6198ce02ce8d6fee5df82b05b073890276c9c700b0334cb5e4\"" Apr 12 19:03:17.434107 env[1144]: time="2024-04-12T19:03:17.434056536Z" level=info msg="CreateContainer within sandbox \"f205e8392f0b3e6198ce02ce8d6fee5df82b05b073890276c9c700b0334cb5e4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 19:03:17.465831 env[1144]: time="2024-04-12T19:03:17.463674831Z" level=info msg="CreateContainer within sandbox \"f205e8392f0b3e6198ce02ce8d6fee5df82b05b073890276c9c700b0334cb5e4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25f91a75ff239c680588f965d38250826e60ef7101d24755e1958a401acb2e30\"" Apr 12 19:03:17.465831 env[1144]: time="2024-04-12T19:03:17.464827929Z" level=info msg="StartContainer for \"25f91a75ff239c680588f965d38250826e60ef7101d24755e1958a401acb2e30\"" Apr 12 19:03:17.468214 env[1144]: time="2024-04-12T19:03:17.466344540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qxj52,Uid:fdb2eaa9-3948-4c24-8492-a3d5deed02de,Namespace:kube-system,Attempt:0,} returns sandbox id \"67d3a00fbffe21e788b29f1fc88f11d0554414351c4b65393b233f03fdcd8b10\"" Apr 12 19:03:17.470636 env[1144]: time="2024-04-12T19:03:17.470557231Z" level=info msg="CreateContainer within sandbox \"67d3a00fbffe21e788b29f1fc88f11d0554414351c4b65393b233f03fdcd8b10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 19:03:17.491836 env[1144]: time="2024-04-12T19:03:17.488508610Z" level=info msg="CreateContainer within sandbox \"67d3a00fbffe21e788b29f1fc88f11d0554414351c4b65393b233f03fdcd8b10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4930fa301ab3d60ed1516790eb0e1ef97db03672e3818729bac1cb9765c3e08\"" Apr 12 19:03:17.491836 env[1144]: 
time="2024-04-12T19:03:17.489608621Z" level=info msg="StartContainer for \"d4930fa301ab3d60ed1516790eb0e1ef97db03672e3818729bac1cb9765c3e08\"" Apr 12 19:03:17.511001 systemd[1]: Started cri-containerd-25f91a75ff239c680588f965d38250826e60ef7101d24755e1958a401acb2e30.scope. Apr 12 19:03:17.550776 systemd[1]: Started cri-containerd-d4930fa301ab3d60ed1516790eb0e1ef97db03672e3818729bac1cb9765c3e08.scope. Apr 12 19:03:17.612712 env[1144]: time="2024-04-12T19:03:17.612655032Z" level=info msg="StartContainer for \"25f91a75ff239c680588f965d38250826e60ef7101d24755e1958a401acb2e30\" returns successfully" Apr 12 19:03:17.653984 env[1144]: time="2024-04-12T19:03:17.653923170Z" level=info msg="StartContainer for \"d4930fa301ab3d60ed1516790eb0e1ef97db03672e3818729bac1cb9765c3e08\" returns successfully" Apr 12 19:03:18.262714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1351217031.mount: Deactivated successfully. Apr 12 19:03:18.609090 kubelet[2033]: I0412 19:03:18.608890 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5ckhn" podStartSLOduration=28.608832305 podCreationTimestamp="2024-04-12 19:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 19:03:18.60757929 +0000 UTC m=+42.487414738" watchObservedRunningTime="2024-04-12 19:03:18.608832305 +0000 UTC m=+42.488667753" Apr 12 19:03:31.917104 systemd[1]: Started sshd@6-10.128.0.35:22-139.178.89.65:53298.service. Apr 12 19:03:32.266290 sshd[3381]: Accepted publickey for core from 139.178.89.65 port 53298 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:03:32.268776 sshd[3381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:03:32.276373 systemd-logind[1127]: New session 6 of user core. Apr 12 19:03:32.276888 systemd[1]: Started session-6.scope. 
Apr 12 19:03:32.612132 sshd[3381]: pam_unix(sshd:session): session closed for user core Apr 12 19:03:32.617888 systemd[1]: sshd@6-10.128.0.35:22-139.178.89.65:53298.service: Deactivated successfully. Apr 12 19:03:32.619367 systemd[1]: session-6.scope: Deactivated successfully. Apr 12 19:03:32.620595 systemd-logind[1127]: Session 6 logged out. Waiting for processes to exit. Apr 12 19:03:32.622347 systemd-logind[1127]: Removed session 6. Apr 12 19:03:37.670407 systemd[1]: Started sshd@7-10.128.0.35:22-139.178.89.65:47092.service. Apr 12 19:03:38.023230 sshd[3397]: Accepted publickey for core from 139.178.89.65 port 47092 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:03:38.025502 sshd[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:03:38.033460 systemd-logind[1127]: New session 7 of user core. Apr 12 19:03:38.035429 systemd[1]: Started session-7.scope. Apr 12 19:03:38.357303 sshd[3397]: pam_unix(sshd:session): session closed for user core Apr 12 19:03:38.362441 systemd[1]: sshd@7-10.128.0.35:22-139.178.89.65:47092.service: Deactivated successfully. Apr 12 19:03:38.363973 systemd[1]: session-7.scope: Deactivated successfully. Apr 12 19:03:38.365122 systemd-logind[1127]: Session 7 logged out. Waiting for processes to exit. Apr 12 19:03:38.367168 systemd-logind[1127]: Removed session 7. Apr 12 19:03:43.414768 systemd[1]: Started sshd@8-10.128.0.35:22-139.178.89.65:47100.service. Apr 12 19:03:43.764252 sshd[3409]: Accepted publickey for core from 139.178.89.65 port 47100 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:03:43.766160 sshd[3409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:03:43.773902 systemd-logind[1127]: New session 8 of user core. Apr 12 19:03:43.774731 systemd[1]: Started session-8.scope. 
Apr 12 19:03:44.096143 sshd[3409]: pam_unix(sshd:session): session closed for user core Apr 12 19:03:44.101710 systemd[1]: sshd@8-10.128.0.35:22-139.178.89.65:47100.service: Deactivated successfully. Apr 12 19:03:44.103169 systemd[1]: session-8.scope: Deactivated successfully. Apr 12 19:03:44.104355 systemd-logind[1127]: Session 8 logged out. Waiting for processes to exit. Apr 12 19:03:44.105686 systemd-logind[1127]: Removed session 8. Apr 12 19:03:49.156497 systemd[1]: Started sshd@9-10.128.0.35:22-139.178.89.65:35612.service. Apr 12 19:03:49.518289 sshd[3423]: Accepted publickey for core from 139.178.89.65 port 35612 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:03:49.520686 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:03:49.528573 systemd-logind[1127]: New session 9 of user core. Apr 12 19:03:49.529593 systemd[1]: Started session-9.scope. Apr 12 19:03:49.853327 sshd[3423]: pam_unix(sshd:session): session closed for user core Apr 12 19:03:49.858608 systemd-logind[1127]: Session 9 logged out. Waiting for processes to exit. Apr 12 19:03:49.859114 systemd[1]: sshd@9-10.128.0.35:22-139.178.89.65:35612.service: Deactivated successfully. Apr 12 19:03:49.860507 systemd[1]: session-9.scope: Deactivated successfully. Apr 12 19:03:49.861978 systemd-logind[1127]: Removed session 9. Apr 12 19:03:54.910262 systemd[1]: Started sshd@10-10.128.0.35:22-139.178.89.65:35626.service. Apr 12 19:03:55.258525 sshd[3437]: Accepted publickey for core from 139.178.89.65 port 35626 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:03:55.260919 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:03:55.268572 systemd[1]: Started session-10.scope. Apr 12 19:03:55.270125 systemd-logind[1127]: New session 10 of user core. 
Apr 12 19:03:55.595706 sshd[3437]: pam_unix(sshd:session): session closed for user core Apr 12 19:03:55.601680 systemd-logind[1127]: Session 10 logged out. Waiting for processes to exit. Apr 12 19:03:55.601990 systemd[1]: sshd@10-10.128.0.35:22-139.178.89.65:35626.service: Deactivated successfully. Apr 12 19:03:55.603414 systemd[1]: session-10.scope: Deactivated successfully. Apr 12 19:03:55.604647 systemd-logind[1127]: Removed session 10. Apr 12 19:03:55.653634 systemd[1]: Started sshd@11-10.128.0.35:22-139.178.89.65:35640.service. Apr 12 19:03:56.009765 sshd[3450]: Accepted publickey for core from 139.178.89.65 port 35640 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:03:56.012096 sshd[3450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:03:56.020016 systemd-logind[1127]: New session 11 of user core. Apr 12 19:03:56.021458 systemd[1]: Started session-11.scope. Apr 12 19:03:57.362880 sshd[3450]: pam_unix(sshd:session): session closed for user core Apr 12 19:03:57.369136 systemd-logind[1127]: Session 11 logged out. Waiting for processes to exit. Apr 12 19:03:57.370503 systemd[1]: sshd@11-10.128.0.35:22-139.178.89.65:35640.service: Deactivated successfully. Apr 12 19:03:57.371929 systemd[1]: session-11.scope: Deactivated successfully. Apr 12 19:03:57.373205 systemd-logind[1127]: Removed session 11. Apr 12 19:03:57.419853 systemd[1]: Started sshd@12-10.128.0.35:22-139.178.89.65:56672.service. Apr 12 19:03:57.771518 sshd[3461]: Accepted publickey for core from 139.178.89.65 port 56672 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:03:57.774634 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:03:57.782623 systemd-logind[1127]: New session 12 of user core. Apr 12 19:03:57.783625 systemd[1]: Started session-12.scope. 
Apr 12 19:03:58.104763 sshd[3461]: pam_unix(sshd:session): session closed for user core Apr 12 19:03:58.110239 systemd[1]: sshd@12-10.128.0.35:22-139.178.89.65:56672.service: Deactivated successfully. Apr 12 19:03:58.111685 systemd[1]: session-12.scope: Deactivated successfully. Apr 12 19:03:58.112862 systemd-logind[1127]: Session 12 logged out. Waiting for processes to exit. Apr 12 19:03:58.114325 systemd-logind[1127]: Removed session 12. Apr 12 19:04:03.163039 systemd[1]: Started sshd@13-10.128.0.35:22-139.178.89.65:56680.service. Apr 12 19:04:03.520124 sshd[3473]: Accepted publickey for core from 139.178.89.65 port 56680 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:04:03.522886 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:04:03.530891 systemd-logind[1127]: New session 13 of user core. Apr 12 19:04:03.531485 systemd[1]: Started session-13.scope. Apr 12 19:04:03.858140 sshd[3473]: pam_unix(sshd:session): session closed for user core Apr 12 19:04:03.863763 systemd[1]: sshd@13-10.128.0.35:22-139.178.89.65:56680.service: Deactivated successfully. Apr 12 19:04:03.865206 systemd[1]: session-13.scope: Deactivated successfully. Apr 12 19:04:03.866408 systemd-logind[1127]: Session 13 logged out. Waiting for processes to exit. Apr 12 19:04:03.867882 systemd-logind[1127]: Removed session 13. Apr 12 19:04:03.916115 systemd[1]: Started sshd@14-10.128.0.35:22-139.178.89.65:56686.service. Apr 12 19:04:04.270056 sshd[3485]: Accepted publickey for core from 139.178.89.65 port 56686 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:04:04.272567 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:04:04.280380 systemd[1]: Started session-14.scope. Apr 12 19:04:04.281739 systemd-logind[1127]: New session 14 of user core. 
Apr 12 19:04:04.685020 sshd[3485]: pam_unix(sshd:session): session closed for user core Apr 12 19:04:04.689852 systemd[1]: sshd@14-10.128.0.35:22-139.178.89.65:56686.service: Deactivated successfully. Apr 12 19:04:04.691287 systemd[1]: session-14.scope: Deactivated successfully. Apr 12 19:04:04.692485 systemd-logind[1127]: Session 14 logged out. Waiting for processes to exit. Apr 12 19:04:04.694896 systemd-logind[1127]: Removed session 14. Apr 12 19:04:04.740824 systemd[1]: Started sshd@15-10.128.0.35:22-139.178.89.65:56694.service. Apr 12 19:04:05.088925 sshd[3494]: Accepted publickey for core from 139.178.89.65 port 56694 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:04:05.090622 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:04:05.099013 systemd[1]: Started session-15.scope. Apr 12 19:04:05.100349 systemd-logind[1127]: New session 15 of user core. Apr 12 19:04:06.238380 sshd[3494]: pam_unix(sshd:session): session closed for user core Apr 12 19:04:06.247478 systemd[1]: sshd@15-10.128.0.35:22-139.178.89.65:56694.service: Deactivated successfully. Apr 12 19:04:06.248999 systemd[1]: session-15.scope: Deactivated successfully. Apr 12 19:04:06.250316 systemd-logind[1127]: Session 15 logged out. Waiting for processes to exit. Apr 12 19:04:06.251754 systemd-logind[1127]: Removed session 15. Apr 12 19:04:06.294123 systemd[1]: Started sshd@16-10.128.0.35:22-139.178.89.65:56704.service. Apr 12 19:04:06.640516 sshd[3512]: Accepted publickey for core from 139.178.89.65 port 56704 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:04:06.642718 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:04:06.650180 systemd[1]: Started session-16.scope. Apr 12 19:04:06.650880 systemd-logind[1127]: New session 16 of user core. 
Apr 12 19:04:07.213637 sshd[3512]: pam_unix(sshd:session): session closed for user core Apr 12 19:04:07.219460 systemd[1]: sshd@16-10.128.0.35:22-139.178.89.65:56704.service: Deactivated successfully. Apr 12 19:04:07.220700 systemd[1]: session-16.scope: Deactivated successfully. Apr 12 19:04:07.221636 systemd-logind[1127]: Session 16 logged out. Waiting for processes to exit. Apr 12 19:04:07.223140 systemd-logind[1127]: Removed session 16. Apr 12 19:04:07.269551 systemd[1]: Started sshd@17-10.128.0.35:22-139.178.89.65:51282.service. Apr 12 19:04:07.613652 sshd[3522]: Accepted publickey for core from 139.178.89.65 port 51282 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:04:07.615354 sshd[3522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:04:07.622621 systemd[1]: Started session-17.scope. Apr 12 19:04:07.623246 systemd-logind[1127]: New session 17 of user core. Apr 12 19:04:07.932074 sshd[3522]: pam_unix(sshd:session): session closed for user core Apr 12 19:04:07.937669 systemd-logind[1127]: Session 17 logged out. Waiting for processes to exit. Apr 12 19:04:07.938900 systemd[1]: sshd@17-10.128.0.35:22-139.178.89.65:51282.service: Deactivated successfully. Apr 12 19:04:07.940121 systemd[1]: session-17.scope: Deactivated successfully. Apr 12 19:04:07.941942 systemd-logind[1127]: Removed session 17. Apr 12 19:04:12.991414 systemd[1]: Started sshd@18-10.128.0.35:22-139.178.89.65:51296.service. Apr 12 19:04:13.342255 sshd[3537]: Accepted publickey for core from 139.178.89.65 port 51296 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:04:13.344247 sshd[3537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:04:13.352924 systemd-logind[1127]: New session 18 of user core. Apr 12 19:04:13.353105 systemd[1]: Started session-18.scope. 
Apr 12 19:04:13.669099 sshd[3537]: pam_unix(sshd:session): session closed for user core Apr 12 19:04:13.674537 systemd[1]: sshd@18-10.128.0.35:22-139.178.89.65:51296.service: Deactivated successfully. Apr 12 19:04:13.676055 systemd[1]: session-18.scope: Deactivated successfully. Apr 12 19:04:13.677260 systemd-logind[1127]: Session 18 logged out. Waiting for processes to exit. Apr 12 19:04:13.678689 systemd-logind[1127]: Removed session 18. Apr 12 19:04:18.726175 systemd[1]: Started sshd@19-10.128.0.35:22-139.178.89.65:60198.service. Apr 12 19:04:19.079269 sshd[3549]: Accepted publickey for core from 139.178.89.65 port 60198 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:04:19.082465 sshd[3549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:04:19.091418 systemd[1]: Started session-19.scope. Apr 12 19:04:19.092212 systemd-logind[1127]: New session 19 of user core. Apr 12 19:04:19.402947 sshd[3549]: pam_unix(sshd:session): session closed for user core Apr 12 19:04:19.407918 systemd[1]: sshd@19-10.128.0.35:22-139.178.89.65:60198.service: Deactivated successfully. Apr 12 19:04:19.409486 systemd[1]: session-19.scope: Deactivated successfully. Apr 12 19:04:19.411081 systemd-logind[1127]: Session 19 logged out. Waiting for processes to exit. Apr 12 19:04:19.412588 systemd-logind[1127]: Removed session 19. Apr 12 19:04:24.459250 systemd[1]: Started sshd@20-10.128.0.35:22-139.178.89.65:60208.service. Apr 12 19:04:24.806456 sshd[3563]: Accepted publickey for core from 139.178.89.65 port 60208 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:04:24.808935 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:04:24.817076 systemd[1]: Started session-20.scope. Apr 12 19:04:24.818095 systemd-logind[1127]: New session 20 of user core. 
Apr 12 19:04:25.129589 sshd[3563]: pam_unix(sshd:session): session closed for user core Apr 12 19:04:25.134461 systemd-logind[1127]: Session 20 logged out. Waiting for processes to exit. Apr 12 19:04:25.134980 systemd[1]: sshd@20-10.128.0.35:22-139.178.89.65:60208.service: Deactivated successfully. Apr 12 19:04:25.136381 systemd[1]: session-20.scope: Deactivated successfully. Apr 12 19:04:25.137722 systemd-logind[1127]: Removed session 20. Apr 12 19:04:25.186063 systemd[1]: Started sshd@21-10.128.0.35:22-139.178.89.65:60214.service. Apr 12 19:04:25.532077 sshd[3575]: Accepted publickey for core from 139.178.89.65 port 60214 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:04:25.534926 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:04:25.543059 systemd[1]: Started session-21.scope. Apr 12 19:04:25.543992 systemd-logind[1127]: New session 21 of user core. Apr 12 19:04:27.887887 kubelet[2033]: I0412 19:04:27.887816 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-qxj52" podStartSLOduration=97.887732137 podCreationTimestamp="2024-04-12 19:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 19:03:18.64402128 +0000 UTC m=+42.523856729" watchObservedRunningTime="2024-04-12 19:04:27.887732137 +0000 UTC m=+111.767567587" Apr 12 19:04:27.909070 env[1144]: time="2024-04-12T19:04:27.908766580Z" level=info msg="StopContainer for \"07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85\" with timeout 30 (s)" Apr 12 19:04:27.909746 env[1144]: time="2024-04-12T19:04:27.909485695Z" level=info msg="Stop container \"07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85\" with signal terminated" Apr 12 19:04:27.933527 systemd[1]: 
run-containerd-runc-k8s.io-8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af-runc.JbOXPl.mount: Deactivated successfully.
Apr 12 19:04:27.967468 systemd[1]: cri-containerd-07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85.scope: Deactivated successfully.
Apr 12 19:04:27.973176 env[1144]: time="2024-04-12T19:04:27.973021361Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 12 19:04:27.985528 env[1144]: time="2024-04-12T19:04:27.985428652Z" level=info msg="StopContainer for \"8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af\" with timeout 2 (s)"
Apr 12 19:04:27.985891 env[1144]: time="2024-04-12T19:04:27.985837497Z" level=info msg="Stop container \"8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af\" with signal terminated"
Apr 12 19:04:28.003092 systemd-networkd[1023]: lxc_health: Link DOWN
Apr 12 19:04:28.003106 systemd-networkd[1023]: lxc_health: Lost carrier
Apr 12 19:04:28.025849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85-rootfs.mount: Deactivated successfully.
Apr 12 19:04:28.035678 systemd[1]: cri-containerd-8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af.scope: Deactivated successfully.
Apr 12 19:04:28.036107 systemd[1]: cri-containerd-8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af.scope: Consumed 10.008s CPU time.
Apr 12 19:04:28.061322 env[1144]: time="2024-04-12T19:04:28.061244043Z" level=info msg="shim disconnected" id=07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85
Apr 12 19:04:28.061322 env[1144]: time="2024-04-12T19:04:28.061321187Z" level=warning msg="cleaning up after shim disconnected" id=07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85 namespace=k8s.io
Apr 12 19:04:28.061837 env[1144]: time="2024-04-12T19:04:28.061340008Z" level=info msg="cleaning up dead shim"
Apr 12 19:04:28.086497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af-rootfs.mount: Deactivated successfully.
Apr 12 19:04:28.089101 env[1144]: time="2024-04-12T19:04:28.089037824Z" level=warning msg="cleanup warnings time=\"2024-04-12T19:04:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3636 runtime=io.containerd.runc.v2\n"
Apr 12 19:04:28.095077 env[1144]: time="2024-04-12T19:04:28.095012393Z" level=info msg="StopContainer for \"07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85\" returns successfully"
Apr 12 19:04:28.096115 env[1144]: time="2024-04-12T19:04:28.095924410Z" level=info msg="shim disconnected" id=8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af
Apr 12 19:04:28.097349 env[1144]: time="2024-04-12T19:04:28.097321872Z" level=warning msg="cleaning up after shim disconnected" id=8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af namespace=k8s.io
Apr 12 19:04:28.097518 env[1144]: time="2024-04-12T19:04:28.097501478Z" level=info msg="cleaning up dead shim"
Apr 12 19:04:28.097957 env[1144]: time="2024-04-12T19:04:28.097257755Z" level=info msg="StopPodSandbox for \"d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8\""
Apr 12 19:04:28.098236 env[1144]: time="2024-04-12T19:04:28.098196565Z" level=info msg="Container to stop \"07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 19:04:28.101597 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8-shm.mount: Deactivated successfully.
Apr 12 19:04:28.119122 systemd[1]: cri-containerd-d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8.scope: Deactivated successfully.
Apr 12 19:04:28.128161 env[1144]: time="2024-04-12T19:04:28.128098269Z" level=warning msg="cleanup warnings time=\"2024-04-12T19:04:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3654 runtime=io.containerd.runc.v2\n"
Apr 12 19:04:28.131438 env[1144]: time="2024-04-12T19:04:28.131391359Z" level=info msg="StopContainer for \"8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af\" returns successfully"
Apr 12 19:04:28.132432 env[1144]: time="2024-04-12T19:04:28.132394228Z" level=info msg="StopPodSandbox for \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\""
Apr 12 19:04:28.132671 env[1144]: time="2024-04-12T19:04:28.132639491Z" level=info msg="Container to stop \"29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 19:04:28.132777 env[1144]: time="2024-04-12T19:04:28.132754795Z" level=info msg="Container to stop \"de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 19:04:28.133125 env[1144]: time="2024-04-12T19:04:28.133086555Z" level=info msg="Container to stop \"8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 19:04:28.133330 env[1144]: time="2024-04-12T19:04:28.133296136Z" level=info msg="Container to stop \"bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 19:04:28.133525 env[1144]: time="2024-04-12T19:04:28.133490960Z" level=info msg="Container to stop \"1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 19:04:28.147850 systemd[1]: cri-containerd-c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699.scope: Deactivated successfully.
Apr 12 19:04:28.190851 env[1144]: time="2024-04-12T19:04:28.190732404Z" level=info msg="shim disconnected" id=d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8
Apr 12 19:04:28.190851 env[1144]: time="2024-04-12T19:04:28.190823256Z" level=warning msg="cleaning up after shim disconnected" id=d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8 namespace=k8s.io
Apr 12 19:04:28.190851 env[1144]: time="2024-04-12T19:04:28.190841336Z" level=info msg="cleaning up dead shim"
Apr 12 19:04:28.198731 env[1144]: time="2024-04-12T19:04:28.198652113Z" level=info msg="shim disconnected" id=c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699
Apr 12 19:04:28.199221 env[1144]: time="2024-04-12T19:04:28.199168657Z" level=warning msg="cleaning up after shim disconnected" id=c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699 namespace=k8s.io
Apr 12 19:04:28.199412 env[1144]: time="2024-04-12T19:04:28.199384534Z" level=info msg="cleaning up dead shim"
Apr 12 19:04:28.206447 env[1144]: time="2024-04-12T19:04:28.206372384Z" level=warning msg="cleanup warnings time=\"2024-04-12T19:04:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3706 runtime=io.containerd.runc.v2\n"
Apr 12 19:04:28.206926 env[1144]: time="2024-04-12T19:04:28.206877673Z" level=info msg="TearDown network for sandbox \"d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8\" successfully"
Apr 12 19:04:28.206926 env[1144]: time="2024-04-12T19:04:28.206923270Z" level=info msg="StopPodSandbox for \"d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8\" returns successfully"
Apr 12 19:04:28.231855 env[1144]: time="2024-04-12T19:04:28.231332730Z" level=warning msg="cleanup warnings time=\"2024-04-12T19:04:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3714 runtime=io.containerd.runc.v2\n"
Apr 12 19:04:28.232118 env[1144]: time="2024-04-12T19:04:28.231919870Z" level=info msg="TearDown network for sandbox \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" successfully"
Apr 12 19:04:28.232118 env[1144]: time="2024-04-12T19:04:28.231960012Z" level=info msg="StopPodSandbox for \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" returns successfully"
Apr 12 19:04:28.295713 kubelet[2033]: I0412 19:04:28.295653 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-xtables-lock\") pod \"b9f4357e-ef50-45e5-a473-b6d66a14c187\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") "
Apr 12 19:04:28.295713 kubelet[2033]: I0412 19:04:28.295742 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml9x2\" (UniqueName: \"kubernetes.io/projected/8797858e-d6f0-44ea-b2e7-23aee40ef301-kube-api-access-ml9x2\") pod \"8797858e-d6f0-44ea-b2e7-23aee40ef301\" (UID: \"8797858e-d6f0-44ea-b2e7-23aee40ef301\") "
Apr 12 19:04:28.296279 kubelet[2033]: I0412 19:04:28.295782 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9f4357e-ef50-45e5-a473-b6d66a14c187-hubble-tls\") pod \"b9f4357e-ef50-45e5-a473-b6d66a14c187\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") "
Apr 12 19:04:28.296279 kubelet[2033]: I0412 19:04:28.295822 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-etc-cni-netd\") pod \"b9f4357e-ef50-45e5-a473-b6d66a14c187\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") "
Apr 12 19:04:28.296279 kubelet[2033]: I0412 19:04:28.295853 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-lib-modules\") pod \"b9f4357e-ef50-45e5-a473-b6d66a14c187\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") "
Apr 12 19:04:28.296279 kubelet[2033]: I0412 19:04:28.295882 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-cilium-run\") pod \"b9f4357e-ef50-45e5-a473-b6d66a14c187\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") "
Apr 12 19:04:28.296279 kubelet[2033]: I0412 19:04:28.295918 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-host-proc-sys-kernel\") pod \"b9f4357e-ef50-45e5-a473-b6d66a14c187\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") "
Apr 12 19:04:28.296279 kubelet[2033]: I0412 19:04:28.295954 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrmzw\" (UniqueName: \"kubernetes.io/projected/b9f4357e-ef50-45e5-a473-b6d66a14c187-kube-api-access-wrmzw\") pod \"b9f4357e-ef50-45e5-a473-b6d66a14c187\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") "
Apr 12 19:04:28.296615 kubelet[2033]: I0412 19:04:28.295987 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-host-proc-sys-net\") pod \"b9f4357e-ef50-45e5-a473-b6d66a14c187\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") "
Apr 12 19:04:28.296615 kubelet[2033]: I0412 19:04:28.296023 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-cilium-cgroup\") pod \"b9f4357e-ef50-45e5-a473-b6d66a14c187\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") "
Apr 12 19:04:28.296615 kubelet[2033]: I0412 19:04:28.296064 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8797858e-d6f0-44ea-b2e7-23aee40ef301-cilium-config-path\") pod \"8797858e-d6f0-44ea-b2e7-23aee40ef301\" (UID: \"8797858e-d6f0-44ea-b2e7-23aee40ef301\") "
Apr 12 19:04:28.296615 kubelet[2033]: I0412 19:04:28.296148 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-cni-path\") pod \"b9f4357e-ef50-45e5-a473-b6d66a14c187\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") "
Apr 12 19:04:28.296615 kubelet[2033]: I0412 19:04:28.296190 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9f4357e-ef50-45e5-a473-b6d66a14c187-clustermesh-secrets\") pod \"b9f4357e-ef50-45e5-a473-b6d66a14c187\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") "
Apr 12 19:04:28.296615 kubelet[2033]: I0412 19:04:28.296238 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-bpf-maps\") pod \"b9f4357e-ef50-45e5-a473-b6d66a14c187\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") "
Apr 12 19:04:28.297004 kubelet[2033]: I0412 19:04:28.296271 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-hostproc\") pod \"b9f4357e-ef50-45e5-a473-b6d66a14c187\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") "
Apr 12 19:04:28.297004 kubelet[2033]: I0412 19:04:28.296314 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9f4357e-ef50-45e5-a473-b6d66a14c187-cilium-config-path\") pod \"b9f4357e-ef50-45e5-a473-b6d66a14c187\" (UID: \"b9f4357e-ef50-45e5-a473-b6d66a14c187\") "
Apr 12 19:04:28.297234 kubelet[2033]: I0412 19:04:28.297188 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b9f4357e-ef50-45e5-a473-b6d66a14c187" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 19:04:28.300260 kubelet[2033]: I0412 19:04:28.300215 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f4357e-ef50-45e5-a473-b6d66a14c187-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b9f4357e-ef50-45e5-a473-b6d66a14c187" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 12 19:04:28.300424 kubelet[2033]: I0412 19:04:28.300307 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b9f4357e-ef50-45e5-a473-b6d66a14c187" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 19:04:28.300424 kubelet[2033]: I0412 19:04:28.300340 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b9f4357e-ef50-45e5-a473-b6d66a14c187" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 19:04:28.300752 kubelet[2033]: I0412 19:04:28.300720 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-cni-path" (OuterVolumeSpecName: "cni-path") pod "b9f4357e-ef50-45e5-a473-b6d66a14c187" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 19:04:28.301303 kubelet[2033]: I0412 19:04:28.301271 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b9f4357e-ef50-45e5-a473-b6d66a14c187" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 19:04:28.301512 kubelet[2033]: I0412 19:04:28.301471 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-hostproc" (OuterVolumeSpecName: "hostproc") pod "b9f4357e-ef50-45e5-a473-b6d66a14c187" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 19:04:28.304012 kubelet[2033]: I0412 19:04:28.303964 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b9f4357e-ef50-45e5-a473-b6d66a14c187" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 19:04:28.304509 kubelet[2033]: I0412 19:04:28.304481 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b9f4357e-ef50-45e5-a473-b6d66a14c187" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 19:04:28.304738 kubelet[2033]: I0412 19:04:28.304697 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b9f4357e-ef50-45e5-a473-b6d66a14c187" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 19:04:28.305500 kubelet[2033]: I0412 19:04:28.305469 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f4357e-ef50-45e5-a473-b6d66a14c187-kube-api-access-wrmzw" (OuterVolumeSpecName: "kube-api-access-wrmzw") pod "b9f4357e-ef50-45e5-a473-b6d66a14c187" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187"). InnerVolumeSpecName "kube-api-access-wrmzw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 19:04:28.305720 kubelet[2033]: I0412 19:04:28.305684 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b9f4357e-ef50-45e5-a473-b6d66a14c187" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 19:04:28.308913 kubelet[2033]: I0412 19:04:28.308859 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8797858e-d6f0-44ea-b2e7-23aee40ef301-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8797858e-d6f0-44ea-b2e7-23aee40ef301" (UID: "8797858e-d6f0-44ea-b2e7-23aee40ef301"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 12 19:04:28.310467 kubelet[2033]: I0412 19:04:28.310435 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8797858e-d6f0-44ea-b2e7-23aee40ef301-kube-api-access-ml9x2" (OuterVolumeSpecName: "kube-api-access-ml9x2") pod "8797858e-d6f0-44ea-b2e7-23aee40ef301" (UID: "8797858e-d6f0-44ea-b2e7-23aee40ef301"). InnerVolumeSpecName "kube-api-access-ml9x2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 19:04:28.314587 kubelet[2033]: I0412 19:04:28.314536 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f4357e-ef50-45e5-a473-b6d66a14c187-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b9f4357e-ef50-45e5-a473-b6d66a14c187" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 19:04:28.315420 kubelet[2033]: I0412 19:04:28.315367 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f4357e-ef50-45e5-a473-b6d66a14c187-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b9f4357e-ef50-45e5-a473-b6d66a14c187" (UID: "b9f4357e-ef50-45e5-a473-b6d66a14c187"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 12 19:04:28.397293 kubelet[2033]: I0412 19:04:28.397232 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9f4357e-ef50-45e5-a473-b6d66a14c187-cilium-config-path\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.397293 kubelet[2033]: I0412 19:04:28.397293 2033 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-hostproc\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.397293 kubelet[2033]: I0412 19:04:28.397314 2033 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-xtables-lock\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.397739 kubelet[2033]: I0412 19:04:28.397342 2033 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ml9x2\" (UniqueName: \"kubernetes.io/projected/8797858e-d6f0-44ea-b2e7-23aee40ef301-kube-api-access-ml9x2\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.397739 kubelet[2033]: I0412 19:04:28.397360 2033 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-etc-cni-netd\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.397739 kubelet[2033]: I0412 19:04:28.397385 2033 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-lib-modules\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.397739 kubelet[2033]: I0412 19:04:28.397403 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-cilium-run\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.397739 kubelet[2033]: I0412 19:04:28.397425 2033 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-host-proc-sys-kernel\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.397739 kubelet[2033]: I0412 19:04:28.397443 2033 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9f4357e-ef50-45e5-a473-b6d66a14c187-hubble-tls\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.397739 kubelet[2033]: I0412 19:04:28.397461 2033 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wrmzw\" (UniqueName: \"kubernetes.io/projected/b9f4357e-ef50-45e5-a473-b6d66a14c187-kube-api-access-wrmzw\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.398062 kubelet[2033]: I0412 19:04:28.397478 2033 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-host-proc-sys-net\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.398062 kubelet[2033]: I0412 19:04:28.397499 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-cilium-cgroup\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.398062 kubelet[2033]: I0412 19:04:28.397517 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8797858e-d6f0-44ea-b2e7-23aee40ef301-cilium-config-path\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.398062 kubelet[2033]: I0412 19:04:28.397539 2033 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-cni-path\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.398062 kubelet[2033]: I0412 19:04:28.397557 2033 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9f4357e-ef50-45e5-a473-b6d66a14c187-clustermesh-secrets\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.398062 kubelet[2033]: I0412 19:04:28.397574 2033 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9f4357e-ef50-45e5-a473-b6d66a14c187-bpf-maps\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:28.419778 systemd[1]: Removed slice kubepods-besteffort-pod8797858e_d6f0_44ea_b2e7_23aee40ef301.slice.
Apr 12 19:04:28.426245 systemd[1]: Removed slice kubepods-burstable-podb9f4357e_ef50_45e5_a473_b6d66a14c187.slice.
Apr 12 19:04:28.426433 systemd[1]: kubepods-burstable-podb9f4357e_ef50_45e5_a473_b6d66a14c187.slice: Consumed 10.186s CPU time.
Apr 12 19:04:28.794130 kubelet[2033]: I0412 19:04:28.793902 2033 scope.go:117] "RemoveContainer" containerID="07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85"
Apr 12 19:04:28.798383 env[1144]: time="2024-04-12T19:04:28.796868289Z" level=info msg="RemoveContainer for \"07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85\""
Apr 12 19:04:28.805512 env[1144]: time="2024-04-12T19:04:28.805458173Z" level=info msg="RemoveContainer for \"07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85\" returns successfully"
Apr 12 19:04:28.806099 kubelet[2033]: I0412 19:04:28.806066 2033 scope.go:117] "RemoveContainer" containerID="07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85"
Apr 12 19:04:28.806646 env[1144]: time="2024-04-12T19:04:28.806530255Z" level=error msg="ContainerStatus for \"07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85\": not found"
Apr 12 19:04:28.807253 kubelet[2033]: E0412 19:04:28.807216 2033 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85\": not found" containerID="07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85"
Apr 12 19:04:28.807383 kubelet[2033]: I0412 19:04:28.807347 2033 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85"} err="failed to get container status \"07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85\": rpc error: code = NotFound desc = an error occurred when try to find container \"07e833dbe533d3b5d3561c02eaad5e2e5dffc907b49217ea67979ba357917b85\": not found"
Apr 12 19:04:28.807567 kubelet[2033]: I0412 19:04:28.807525 2033 scope.go:117] "RemoveContainer" containerID="8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af"
Apr 12 19:04:28.811688 env[1144]: time="2024-04-12T19:04:28.811647224Z" level=info msg="RemoveContainer for \"8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af\""
Apr 12 19:04:28.817391 env[1144]: time="2024-04-12T19:04:28.816971235Z" level=info msg="RemoveContainer for \"8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af\" returns successfully"
Apr 12 19:04:28.817543 kubelet[2033]: I0412 19:04:28.817180 2033 scope.go:117] "RemoveContainer" containerID="1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8"
Apr 12 19:04:28.822466 env[1144]: time="2024-04-12T19:04:28.821703684Z" level=info msg="RemoveContainer for \"1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8\""
Apr 12 19:04:28.835515 env[1144]: time="2024-04-12T19:04:28.835452547Z" level=info msg="RemoveContainer for \"1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8\" returns successfully"
Apr 12 19:04:28.838881 kubelet[2033]: I0412 19:04:28.838838 2033 scope.go:117] "RemoveContainer" containerID="de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62"
Apr 12 19:04:28.842261 env[1144]: time="2024-04-12T19:04:28.841290281Z" level=info msg="RemoveContainer for \"de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62\""
Apr 12 19:04:28.847407 env[1144]: time="2024-04-12T19:04:28.847354515Z" level=info msg="RemoveContainer for \"de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62\" returns successfully"
Apr 12 19:04:28.847711 kubelet[2033]: I0412 19:04:28.847683 2033 scope.go:117] "RemoveContainer" containerID="bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b"
Apr 12 19:04:28.849390 env[1144]: time="2024-04-12T19:04:28.849326372Z" level=info msg="RemoveContainer for \"bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b\""
Apr 12 19:04:28.855221 env[1144]: time="2024-04-12T19:04:28.855170097Z" level=info msg="RemoveContainer for \"bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b\" returns successfully"
Apr 12 19:04:28.855476 kubelet[2033]: I0412 19:04:28.855438 2033 scope.go:117] "RemoveContainer" containerID="29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec"
Apr 12 19:04:28.857300 env[1144]: time="2024-04-12T19:04:28.857256774Z" level=info msg="RemoveContainer for \"29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec\""
Apr 12 19:04:28.869612 env[1144]: time="2024-04-12T19:04:28.869535288Z" level=info msg="RemoveContainer for \"29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec\" returns successfully"
Apr 12 19:04:28.869977 kubelet[2033]: I0412 19:04:28.869923 2033 scope.go:117] "RemoveContainer" containerID="8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af"
Apr 12 19:04:28.870425 env[1144]: time="2024-04-12T19:04:28.870333173Z" level=error msg="ContainerStatus for \"8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af\": not found"
Apr 12 19:04:28.870726 kubelet[2033]: E0412 19:04:28.870701 2033 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af\": not found" containerID="8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af"
Apr 12 19:04:28.870963 kubelet[2033]: I0412 19:04:28.870942 2033 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af"} err="failed to get container status \"8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af\": rpc error: code = NotFound desc = an error occurred when try to find container \"8cc363694838b12af80522858aecbd8e571705e10980c8c2bf7196339f9cb1af\": not found"
Apr 12 19:04:28.871107 kubelet[2033]: I0412 19:04:28.871084 2033 scope.go:117] "RemoveContainer" containerID="1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8"
Apr 12 19:04:28.871531 env[1144]: time="2024-04-12T19:04:28.871410284Z" level=error msg="ContainerStatus for \"1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8\": not found"
Apr 12 19:04:28.871698 kubelet[2033]: E0412 19:04:28.871671 2033 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8\": not found" containerID="1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8"
Apr 12 19:04:28.871831 kubelet[2033]: I0412 19:04:28.871727 2033 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8"} err="failed to get container status \"1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"1afaa154deff86d24643552cbd973396fdea924acba5900c1eed7d2c64cde9f8\": not found"
Apr 12 19:04:28.871831 kubelet[2033]: I0412 19:04:28.871753 2033 scope.go:117] "RemoveContainer" containerID="de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62"
Apr 12 19:04:28.872249 env[1144]: time="2024-04-12T19:04:28.872154156Z" level=error msg="ContainerStatus for \"de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62\": not found"
Apr 12 19:04:28.872427 kubelet[2033]: E0412 19:04:28.872402 2033 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62\": not found" containerID="de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62"
Apr 12 19:04:28.872534 kubelet[2033]: I0412 19:04:28.872449 2033 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62"} err="failed to get container status \"de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62\": rpc error: code = NotFound desc = an error occurred when try to find container \"de7bad47da0c0cc019f7bd2f0149c8c9aab4e44affdc0a60281145511e1b9d62\": not found"
Apr 12 19:04:28.872534 kubelet[2033]: I0412 19:04:28.872467 2033 scope.go:117] "RemoveContainer" containerID="bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b"
Apr 12 19:04:28.872769 env[1144]: time="2024-04-12T19:04:28.872697569Z" level=error msg="ContainerStatus for \"bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b\": not found"
Apr 12 19:04:28.873027 kubelet[2033]: E0412 19:04:28.873002 2033 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b\": not found" containerID="bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b"
Apr 12 19:04:28.873134 kubelet[2033]: I0412 19:04:28.873049 2033 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b"} err="failed to get container status \"bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdfa62cb021aeaa591fbee319262b636bc2e998f8bf4b4f6f9a5687cd575020b\": not found"
Apr 12 19:04:28.873134 kubelet[2033]: I0412 19:04:28.873075 2033 scope.go:117] "RemoveContainer" containerID="29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec"
Apr 12 19:04:28.873388 env[1144]: time="2024-04-12T19:04:28.873298619Z" level=error msg="ContainerStatus for \"29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec\": not found"
Apr 12 19:04:28.873621 kubelet[2033]: E0412 19:04:28.873594 2033 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec\": not found" containerID="29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec"
Apr 12 19:04:28.873727 kubelet[2033]: I0412 19:04:28.873649 2033 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec"} err="failed to get container status \"29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"29e8b362fb44654b7c55b7caddc0738093379388c964d0e7eade5d822e3598ec\": not found"
Apr 12 19:04:28.922687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8-rootfs.mount: Deactivated successfully.
Apr 12 19:04:28.922884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699-rootfs.mount: Deactivated successfully.
Apr 12 19:04:28.923003 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699-shm.mount: Deactivated successfully.
Apr 12 19:04:28.923122 systemd[1]: var-lib-kubelet-pods-b9f4357e\x2def50\x2d45e5\x2da473\x2db6d66a14c187-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwrmzw.mount: Deactivated successfully.
Apr 12 19:04:28.923232 systemd[1]: var-lib-kubelet-pods-8797858e\x2dd6f0\x2d44ea\x2db2e7\x2d23aee40ef301-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dml9x2.mount: Deactivated successfully.
Apr 12 19:04:28.923328 systemd[1]: var-lib-kubelet-pods-b9f4357e\x2def50\x2d45e5\x2da473\x2db6d66a14c187-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 12 19:04:28.923434 systemd[1]: var-lib-kubelet-pods-b9f4357e\x2def50\x2d45e5\x2da473\x2db6d66a14c187-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 12 19:04:29.885265 sshd[3575]: pam_unix(sshd:session): session closed for user core
Apr 12 19:04:29.890952 systemd-logind[1127]: Session 21 logged out. Waiting for processes to exit.
Apr 12 19:04:29.891265 systemd[1]: sshd@21-10.128.0.35:22-139.178.89.65:60214.service: Deactivated successfully.
Apr 12 19:04:29.892757 systemd[1]: session-21.scope: Deactivated successfully.
Apr 12 19:04:29.893413 systemd[1]: session-21.scope: Consumed 1.553s CPU time.
Apr 12 19:04:29.894546 systemd-logind[1127]: Removed session 21.
Apr 12 19:04:29.946260 systemd[1]: Started sshd@22-10.128.0.35:22-139.178.89.65:52758.service.
Apr 12 19:04:30.299291 sshd[3738]: Accepted publickey for core from 139.178.89.65 port 52758 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:04:30.301054 sshd[3738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:04:30.308831 systemd[1]: Started session-22.scope. Apr 12 19:04:30.309784 systemd-logind[1127]: New session 22 of user core. Apr 12 19:04:30.409324 kubelet[2033]: I0412 19:04:30.409277 2033 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8797858e-d6f0-44ea-b2e7-23aee40ef301" path="/var/lib/kubelet/pods/8797858e-d6f0-44ea-b2e7-23aee40ef301/volumes" Apr 12 19:04:30.411131 kubelet[2033]: I0412 19:04:30.411090 2033 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b9f4357e-ef50-45e5-a473-b6d66a14c187" path="/var/lib/kubelet/pods/b9f4357e-ef50-45e5-a473-b6d66a14c187/volumes" Apr 12 19:04:31.212231 kubelet[2033]: I0412 19:04:31.212155 2033 topology_manager.go:215] "Topology Admit Handler" podUID="931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" podNamespace="kube-system" podName="cilium-wpzvn" Apr 12 19:04:31.212692 kubelet[2033]: E0412 19:04:31.212664 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b9f4357e-ef50-45e5-a473-b6d66a14c187" containerName="apply-sysctl-overwrites" Apr 12 19:04:31.212913 kubelet[2033]: E0412 19:04:31.212894 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8797858e-d6f0-44ea-b2e7-23aee40ef301" containerName="cilium-operator" Apr 12 19:04:31.213092 kubelet[2033]: E0412 19:04:31.213073 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b9f4357e-ef50-45e5-a473-b6d66a14c187" containerName="mount-bpf-fs" Apr 12 19:04:31.213241 kubelet[2033]: E0412 19:04:31.213225 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b9f4357e-ef50-45e5-a473-b6d66a14c187" containerName="cilium-agent" Apr 12 19:04:31.213382 kubelet[2033]: E0412 19:04:31.213365 2033 cpu_manager.go:395] 
"RemoveStaleState: removing container" podUID="b9f4357e-ef50-45e5-a473-b6d66a14c187" containerName="mount-cgroup" Apr 12 19:04:31.213531 kubelet[2033]: E0412 19:04:31.213515 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b9f4357e-ef50-45e5-a473-b6d66a14c187" containerName="clean-cilium-state" Apr 12 19:04:31.213709 kubelet[2033]: I0412 19:04:31.213684 2033 memory_manager.go:346] "RemoveStaleState removing state" podUID="b9f4357e-ef50-45e5-a473-b6d66a14c187" containerName="cilium-agent" Apr 12 19:04:31.213883 kubelet[2033]: I0412 19:04:31.213865 2033 memory_manager.go:346] "RemoveStaleState removing state" podUID="8797858e-d6f0-44ea-b2e7-23aee40ef301" containerName="cilium-operator" Apr 12 19:04:31.225435 systemd[1]: Created slice kubepods-burstable-pod931ef0c9_2059_4e2d_9c7b_5af1e9eb236c.slice. Apr 12 19:04:31.231740 sshd[3738]: pam_unix(sshd:session): session closed for user core Apr 12 19:04:31.237236 kubelet[2033]: W0412 19:04:31.237203 2033 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:04:31.237331 systemd[1]: sshd@22-10.128.0.35:22-139.178.89.65:52758.service: Deactivated successfully. 
Apr 12 19:04:31.237626 kubelet[2033]: E0412 19:04:31.237605 2033 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:04:31.238540 kubelet[2033]: W0412 19:04:31.238512 2033 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:04:31.238681 systemd[1]: session-22.scope: Deactivated successfully. Apr 12 19:04:31.238876 kubelet[2033]: E0412 19:04:31.238856 2033 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:04:31.238893 systemd-logind[1127]: Session 22 logged out. Waiting for processes to exit. 
Apr 12 19:04:31.240160 kubelet[2033]: W0412 19:04:31.239860 2033 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:04:31.240596 kubelet[2033]: E0412 19:04:31.240577 2033 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:04:31.241086 kubelet[2033]: W0412 19:04:31.241054 2033 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:04:31.241236 kubelet[2033]: E0412 19:04:31.241221 2033 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal' and this object Apr 12 19:04:31.241663 systemd-logind[1127]: Removed session 22. 
Apr 12 19:04:31.289271 systemd[1]: Started sshd@23-10.128.0.35:22-139.178.89.65:52774.service. Apr 12 19:04:31.320426 kubelet[2033]: I0412 19:04:31.319492 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-run\") pod \"cilium-wpzvn\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " pod="kube-system/cilium-wpzvn" Apr 12 19:04:31.320426 kubelet[2033]: I0412 19:04:31.319574 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-config-path\") pod \"cilium-wpzvn\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " pod="kube-system/cilium-wpzvn" Apr 12 19:04:31.320426 kubelet[2033]: I0412 19:04:31.319608 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-etc-cni-netd\") pod \"cilium-wpzvn\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " pod="kube-system/cilium-wpzvn" Apr 12 19:04:31.320426 kubelet[2033]: I0412 19:04:31.319642 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cni-path\") pod \"cilium-wpzvn\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " pod="kube-system/cilium-wpzvn" Apr 12 19:04:31.320426 kubelet[2033]: I0412 19:04:31.319685 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj9gj\" (UniqueName: \"kubernetes.io/projected/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-kube-api-access-bj9gj\") pod \"cilium-wpzvn\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " pod="kube-system/cilium-wpzvn" Apr 12 19:04:31.320426 
kubelet[2033]: I0412 19:04:31.319722 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-cgroup\") pod \"cilium-wpzvn\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " pod="kube-system/cilium-wpzvn" Apr 12 19:04:31.321003 kubelet[2033]: I0412 19:04:31.319757 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-ipsec-secrets\") pod \"cilium-wpzvn\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " pod="kube-system/cilium-wpzvn" Apr 12 19:04:31.321003 kubelet[2033]: I0412 19:04:31.319790 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-host-proc-sys-net\") pod \"cilium-wpzvn\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " pod="kube-system/cilium-wpzvn" Apr 12 19:04:31.321003 kubelet[2033]: I0412 19:04:31.319871 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-bpf-maps\") pod \"cilium-wpzvn\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " pod="kube-system/cilium-wpzvn" Apr 12 19:04:31.321003 kubelet[2033]: I0412 19:04:31.319909 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-hostproc\") pod \"cilium-wpzvn\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " pod="kube-system/cilium-wpzvn" Apr 12 19:04:31.321003 kubelet[2033]: I0412 19:04:31.319943 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-lib-modules\") pod \"cilium-wpzvn\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " pod="kube-system/cilium-wpzvn" Apr 12 19:04:31.321003 kubelet[2033]: I0412 19:04:31.319980 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-xtables-lock\") pod \"cilium-wpzvn\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " pod="kube-system/cilium-wpzvn" Apr 12 19:04:31.321299 kubelet[2033]: I0412 19:04:31.320022 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-clustermesh-secrets\") pod \"cilium-wpzvn\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " pod="kube-system/cilium-wpzvn" Apr 12 19:04:31.321299 kubelet[2033]: I0412 19:04:31.320054 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-hubble-tls\") pod \"cilium-wpzvn\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " pod="kube-system/cilium-wpzvn" Apr 12 19:04:31.321299 kubelet[2033]: I0412 19:04:31.320090 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-host-proc-sys-kernel\") pod \"cilium-wpzvn\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " pod="kube-system/cilium-wpzvn" Apr 12 19:04:31.642643 kubelet[2033]: E0412 19:04:31.642595 2033 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 19:04:31.651267 
sshd[3749]: Accepted publickey for core from 139.178.89.65 port 52774 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:04:31.653723 sshd[3749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:04:31.661995 systemd[1]: Started session-23.scope. Apr 12 19:04:31.662909 systemd-logind[1127]: New session 23 of user core. Apr 12 19:04:31.970023 kubelet[2033]: E0412 19:04:31.969825 2033 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-wpzvn" podUID="931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" Apr 12 19:04:32.001724 sshd[3749]: pam_unix(sshd:session): session closed for user core Apr 12 19:04:32.007311 systemd-logind[1127]: Session 23 logged out. Waiting for processes to exit. Apr 12 19:04:32.009351 systemd[1]: sshd@23-10.128.0.35:22-139.178.89.65:52774.service: Deactivated successfully. Apr 12 19:04:32.010734 systemd[1]: session-23.scope: Deactivated successfully. Apr 12 19:04:32.012985 systemd-logind[1127]: Removed session 23. Apr 12 19:04:32.058548 systemd[1]: Started sshd@24-10.128.0.35:22-139.178.89.65:52776.service. Apr 12 19:04:32.409451 sshd[3762]: Accepted publickey for core from 139.178.89.65 port 52776 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 19:04:32.412049 sshd[3762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 19:04:32.420710 systemd[1]: Started session-24.scope. Apr 12 19:04:32.421430 systemd-logind[1127]: New session 24 of user core. 
Apr 12 19:04:32.423215 kubelet[2033]: E0412 19:04:32.423176 2033 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Apr 12 19:04:32.423377 kubelet[2033]: E0412 19:04:32.423299 2033 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-config-path podName:931ef0c9-2059-4e2d-9c7b-5af1e9eb236c nodeName:}" failed. No retries permitted until 2024-04-12 19:04:32.923266414 +0000 UTC m=+116.803101890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-config-path") pod "cilium-wpzvn" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c") : failed to sync configmap cache: timed out waiting for the condition Apr 12 19:04:32.424010 kubelet[2033]: E0412 19:04:32.423681 2033 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Apr 12 19:04:32.424010 kubelet[2033]: E0412 19:04:32.423768 2033 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-clustermesh-secrets podName:931ef0c9-2059-4e2d-9c7b-5af1e9eb236c nodeName:}" failed. No retries permitted until 2024-04-12 19:04:32.923748454 +0000 UTC m=+116.803583897 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-clustermesh-secrets") pod "cilium-wpzvn" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c") : failed to sync secret cache: timed out waiting for the condition Apr 12 19:04:32.934115 kubelet[2033]: I0412 19:04:32.934049 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-host-proc-sys-kernel\") pod \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " Apr 12 19:04:32.935007 kubelet[2033]: I0412 19:04:32.934642 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 19:04:32.935150 kubelet[2033]: I0412 19:04:32.935122 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 19:04:32.935281 kubelet[2033]: I0412 19:04:32.934991 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-run\") pod \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " Apr 12 19:04:32.935482 kubelet[2033]: I0412 19:04:32.935448 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj9gj\" (UniqueName: \"kubernetes.io/projected/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-kube-api-access-bj9gj\") pod \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " Apr 12 19:04:32.935660 kubelet[2033]: I0412 19:04:32.935641 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-cgroup\") pod \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " Apr 12 19:04:32.935864 kubelet[2033]: I0412 19:04:32.935847 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-bpf-maps\") pod \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " Apr 12 19:04:32.936073 kubelet[2033]: I0412 19:04:32.936053 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-hubble-tls\") pod \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " Apr 12 19:04:32.936247 kubelet[2033]: I0412 19:04:32.936231 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cni-path\") pod \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " Apr 12 19:04:32.936426 kubelet[2033]: I0412 19:04:32.936409 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-xtables-lock\") pod \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " Apr 12 19:04:32.936599 kubelet[2033]: I0412 19:04:32.936584 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-host-proc-sys-net\") pod \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " Apr 12 19:04:32.936761 kubelet[2033]: I0412 19:04:32.936745 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-hostproc\") pod \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " Apr 12 19:04:32.936945 kubelet[2033]: I0412 19:04:32.936928 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-etc-cni-netd\") pod \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " Apr 12 19:04:32.937139 kubelet[2033]: I0412 19:04:32.937110 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-ipsec-secrets\") pod \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " Apr 12 19:04:32.937304 kubelet[2033]: I0412 19:04:32.937287 2033 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-lib-modules\") pod \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") " Apr 12 19:04:32.937647 kubelet[2033]: I0412 19:04:32.937624 2033 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-host-proc-sys-kernel\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 19:04:32.937840 kubelet[2033]: I0412 19:04:32.937794 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-run\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 19:04:32.941201 kubelet[2033]: I0412 19:04:32.941143 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 19:04:32.941323 kubelet[2033]: I0412 19:04:32.941216 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 19:04:32.941323 kubelet[2033]: I0412 19:04:32.936583 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 19:04:32.941323 kubelet[2033]: I0412 19:04:32.936607 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 19:04:32.941323 kubelet[2033]: I0412 19:04:32.936633 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cni-path" (OuterVolumeSpecName: "cni-path") pod "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 19:04:32.941323 kubelet[2033]: I0412 19:04:32.941275 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-hostproc" (OuterVolumeSpecName: "hostproc") pod "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 19:04:32.941617 kubelet[2033]: I0412 19:04:32.941322 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 19:04:32.948236 systemd[1]: var-lib-kubelet-pods-931ef0c9\x2d2059\x2d4e2d\x2d9c7b\x2d5af1e9eb236c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbj9gj.mount: Deactivated successfully.
Apr 12 19:04:32.954197 systemd[1]: var-lib-kubelet-pods-931ef0c9\x2d2059\x2d4e2d\x2d9c7b\x2d5af1e9eb236c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Apr 12 19:04:32.954382 systemd[1]: var-lib-kubelet-pods-931ef0c9\x2d2059\x2d4e2d\x2d9c7b\x2d5af1e9eb236c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 12 19:04:32.956006 kubelet[2033]: I0412 19:04:32.955952 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 19:04:32.961446 kubelet[2033]: I0412 19:04:32.961407 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 19:04:32.962450 kubelet[2033]: I0412 19:04:32.962411 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-kube-api-access-bj9gj" (OuterVolumeSpecName: "kube-api-access-bj9gj") pod "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c"). InnerVolumeSpecName "kube-api-access-bj9gj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 19:04:32.966533 kubelet[2033]: I0412 19:04:32.966493 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 12 19:04:33.039246 kubelet[2033]: I0412 19:04:33.039161 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-config-path\") pod \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") "
Apr 12 19:04:33.039246 kubelet[2033]: I0412 19:04:33.039248 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-clustermesh-secrets\") pod \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\" (UID: \"931ef0c9-2059-4e2d-9c7b-5af1e9eb236c\") "
Apr 12 19:04:33.039645 kubelet[2033]: I0412 19:04:33.039328 2033 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bj9gj\" (UniqueName: \"kubernetes.io/projected/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-kube-api-access-bj9gj\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:33.039645 kubelet[2033]: I0412 19:04:33.039349 2033 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-hubble-tls\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:33.039645 kubelet[2033]: I0412 19:04:33.039369 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-cgroup\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:33.039645 kubelet[2033]: I0412 19:04:33.039413 2033 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-bpf-maps\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:33.039645 kubelet[2033]: I0412 19:04:33.039434 2033 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cni-path\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:33.039645 kubelet[2033]: I0412 19:04:33.039451 2033 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-xtables-lock\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:33.039645 kubelet[2033]: I0412 19:04:33.039471 2033 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-etc-cni-netd\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:33.040097 kubelet[2033]: I0412 19:04:33.039495 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-ipsec-secrets\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:33.040097 kubelet[2033]: I0412 19:04:33.039514 2033 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-host-proc-sys-net\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:33.040097 kubelet[2033]: I0412 19:04:33.039532 2033 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-hostproc\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:33.040097 kubelet[2033]: I0412 19:04:33.039588 2033 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-lib-modules\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:33.043598 kubelet[2033]: I0412 19:04:33.043544 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 12 19:04:33.047463 systemd[1]: var-lib-kubelet-pods-931ef0c9\x2d2059\x2d4e2d\x2d9c7b\x2d5af1e9eb236c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 12 19:04:33.048248 kubelet[2033]: I0412 19:04:33.048208 2033 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" (UID: "931ef0c9-2059-4e2d-9c7b-5af1e9eb236c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 12 19:04:33.140162 kubelet[2033]: I0412 19:04:33.140093 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-cilium-config-path\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:33.140162 kubelet[2033]: I0412 19:04:33.140149 2033 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c-clustermesh-secrets\") on node \"ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal\" DevicePath \"\""
Apr 12 19:04:33.831377 systemd[1]: Removed slice kubepods-burstable-pod931ef0c9_2059_4e2d_9c7b_5af1e9eb236c.slice.
Apr 12 19:04:33.878834 kubelet[2033]: I0412 19:04:33.878756 2033 topology_manager.go:215] "Topology Admit Handler" podUID="3e956a2a-9a24-48dd-96f1-cc3bd80841ee" podNamespace="kube-system" podName="cilium-bj6vg"
Apr 12 19:04:33.890374 systemd[1]: Created slice kubepods-burstable-pod3e956a2a_9a24_48dd_96f1_cc3bd80841ee.slice.
Apr 12 19:04:33.945122 kubelet[2033]: I0412 19:04:33.945050 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e956a2a-9a24-48dd-96f1-cc3bd80841ee-cilium-cgroup\") pod \"cilium-bj6vg\" (UID: \"3e956a2a-9a24-48dd-96f1-cc3bd80841ee\") " pod="kube-system/cilium-bj6vg"
Apr 12 19:04:33.946075 kubelet[2033]: I0412 19:04:33.946046 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e956a2a-9a24-48dd-96f1-cc3bd80841ee-xtables-lock\") pod \"cilium-bj6vg\" (UID: \"3e956a2a-9a24-48dd-96f1-cc3bd80841ee\") " pod="kube-system/cilium-bj6vg"
Apr 12 19:04:33.946401 kubelet[2033]: I0412 19:04:33.946368 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e956a2a-9a24-48dd-96f1-cc3bd80841ee-clustermesh-secrets\") pod \"cilium-bj6vg\" (UID: \"3e956a2a-9a24-48dd-96f1-cc3bd80841ee\") " pod="kube-system/cilium-bj6vg"
Apr 12 19:04:33.946686 kubelet[2033]: I0412 19:04:33.946666 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3e956a2a-9a24-48dd-96f1-cc3bd80841ee-cilium-ipsec-secrets\") pod \"cilium-bj6vg\" (UID: \"3e956a2a-9a24-48dd-96f1-cc3bd80841ee\") " pod="kube-system/cilium-bj6vg"
Apr 12 19:04:33.946987 kubelet[2033]: I0412 19:04:33.946968 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e956a2a-9a24-48dd-96f1-cc3bd80841ee-bpf-maps\") pod \"cilium-bj6vg\" (UID: \"3e956a2a-9a24-48dd-96f1-cc3bd80841ee\") " pod="kube-system/cilium-bj6vg"
Apr 12 19:04:33.947235 kubelet[2033]: I0412 19:04:33.947215 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e956a2a-9a24-48dd-96f1-cc3bd80841ee-hostproc\") pod \"cilium-bj6vg\" (UID: \"3e956a2a-9a24-48dd-96f1-cc3bd80841ee\") " pod="kube-system/cilium-bj6vg"
Apr 12 19:04:33.947465 kubelet[2033]: I0412 19:04:33.947444 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfs22\" (UniqueName: \"kubernetes.io/projected/3e956a2a-9a24-48dd-96f1-cc3bd80841ee-kube-api-access-nfs22\") pod \"cilium-bj6vg\" (UID: \"3e956a2a-9a24-48dd-96f1-cc3bd80841ee\") " pod="kube-system/cilium-bj6vg"
Apr 12 19:04:33.947669 kubelet[2033]: I0412 19:04:33.947653 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e956a2a-9a24-48dd-96f1-cc3bd80841ee-cni-path\") pod \"cilium-bj6vg\" (UID: \"3e956a2a-9a24-48dd-96f1-cc3bd80841ee\") " pod="kube-system/cilium-bj6vg"
Apr 12 19:04:33.947901 kubelet[2033]: I0412 19:04:33.947884 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e956a2a-9a24-48dd-96f1-cc3bd80841ee-cilium-config-path\") pod \"cilium-bj6vg\" (UID: \"3e956a2a-9a24-48dd-96f1-cc3bd80841ee\") " pod="kube-system/cilium-bj6vg"
Apr 12 19:04:33.948133 kubelet[2033]: I0412 19:04:33.948117 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e956a2a-9a24-48dd-96f1-cc3bd80841ee-etc-cni-netd\") pod \"cilium-bj6vg\" (UID: \"3e956a2a-9a24-48dd-96f1-cc3bd80841ee\") " pod="kube-system/cilium-bj6vg"
Apr 12 19:04:33.948316 kubelet[2033]: I0412 19:04:33.948301 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e956a2a-9a24-48dd-96f1-cc3bd80841ee-hubble-tls\") pod \"cilium-bj6vg\" (UID: \"3e956a2a-9a24-48dd-96f1-cc3bd80841ee\") " pod="kube-system/cilium-bj6vg"
Apr 12 19:04:33.948505 kubelet[2033]: I0412 19:04:33.948491 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e956a2a-9a24-48dd-96f1-cc3bd80841ee-host-proc-sys-kernel\") pod \"cilium-bj6vg\" (UID: \"3e956a2a-9a24-48dd-96f1-cc3bd80841ee\") " pod="kube-system/cilium-bj6vg"
Apr 12 19:04:33.948671 kubelet[2033]: I0412 19:04:33.948657 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e956a2a-9a24-48dd-96f1-cc3bd80841ee-cilium-run\") pod \"cilium-bj6vg\" (UID: \"3e956a2a-9a24-48dd-96f1-cc3bd80841ee\") " pod="kube-system/cilium-bj6vg"
Apr 12 19:04:33.948857 kubelet[2033]: I0412 19:04:33.948842 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e956a2a-9a24-48dd-96f1-cc3bd80841ee-lib-modules\") pod \"cilium-bj6vg\" (UID: \"3e956a2a-9a24-48dd-96f1-cc3bd80841ee\") " pod="kube-system/cilium-bj6vg"
Apr 12 19:04:33.949068 kubelet[2033]: I0412 19:04:33.949049 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e956a2a-9a24-48dd-96f1-cc3bd80841ee-host-proc-sys-net\") pod \"cilium-bj6vg\" (UID: \"3e956a2a-9a24-48dd-96f1-cc3bd80841ee\") " pod="kube-system/cilium-bj6vg"
Apr 12 19:04:34.209091 env[1144]: time="2024-04-12T19:04:34.208926317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bj6vg,Uid:3e956a2a-9a24-48dd-96f1-cc3bd80841ee,Namespace:kube-system,Attempt:0,}"
Apr 12 19:04:34.243215 env[1144]: time="2024-04-12T19:04:34.243075282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 19:04:34.243215 env[1144]: time="2024-04-12T19:04:34.243151581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 19:04:34.243646 env[1144]: time="2024-04-12T19:04:34.243172372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 19:04:34.244253 env[1144]: time="2024-04-12T19:04:34.243878416Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb66cdf609d761980ff9107b040abfe241ca5bc9e9647b309fb27348fa07a8be pid=3792 runtime=io.containerd.runc.v2
Apr 12 19:04:34.276519 systemd[1]: Started cri-containerd-cb66cdf609d761980ff9107b040abfe241ca5bc9e9647b309fb27348fa07a8be.scope.
Apr 12 19:04:34.327857 env[1144]: time="2024-04-12T19:04:34.327772466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bj6vg,Uid:3e956a2a-9a24-48dd-96f1-cc3bd80841ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb66cdf609d761980ff9107b040abfe241ca5bc9e9647b309fb27348fa07a8be\""
Apr 12 19:04:34.335993 env[1144]: time="2024-04-12T19:04:34.335931390Z" level=info msg="CreateContainer within sandbox \"cb66cdf609d761980ff9107b040abfe241ca5bc9e9647b309fb27348fa07a8be\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 12 19:04:34.354139 env[1144]: time="2024-04-12T19:04:34.354064008Z" level=info msg="CreateContainer within sandbox \"cb66cdf609d761980ff9107b040abfe241ca5bc9e9647b309fb27348fa07a8be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ef3214c95fa7555099aa557a3316dc22251cbdf02fe0907d736f284e0fcbeaf7\""
Apr 12 19:04:34.356999 env[1144]: time="2024-04-12T19:04:34.355086766Z" level=info msg="StartContainer for \"ef3214c95fa7555099aa557a3316dc22251cbdf02fe0907d736f284e0fcbeaf7\""
Apr 12 19:04:34.381847 systemd[1]: Started cri-containerd-ef3214c95fa7555099aa557a3316dc22251cbdf02fe0907d736f284e0fcbeaf7.scope.
Apr 12 19:04:34.411026 kubelet[2033]: I0412 19:04:34.410964 2033 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="931ef0c9-2059-4e2d-9c7b-5af1e9eb236c" path="/var/lib/kubelet/pods/931ef0c9-2059-4e2d-9c7b-5af1e9eb236c/volumes"
Apr 12 19:04:34.432908 env[1144]: time="2024-04-12T19:04:34.432791659Z" level=info msg="StartContainer for \"ef3214c95fa7555099aa557a3316dc22251cbdf02fe0907d736f284e0fcbeaf7\" returns successfully"
Apr 12 19:04:34.445274 systemd[1]: cri-containerd-ef3214c95fa7555099aa557a3316dc22251cbdf02fe0907d736f284e0fcbeaf7.scope: Deactivated successfully.
Apr 12 19:04:34.488761 env[1144]: time="2024-04-12T19:04:34.487758176Z" level=info msg="shim disconnected" id=ef3214c95fa7555099aa557a3316dc22251cbdf02fe0907d736f284e0fcbeaf7
Apr 12 19:04:34.488761 env[1144]: time="2024-04-12T19:04:34.487921490Z" level=warning msg="cleaning up after shim disconnected" id=ef3214c95fa7555099aa557a3316dc22251cbdf02fe0907d736f284e0fcbeaf7 namespace=k8s.io
Apr 12 19:04:34.488761 env[1144]: time="2024-04-12T19:04:34.487959265Z" level=info msg="cleaning up dead shim"
Apr 12 19:04:34.503601 env[1144]: time="2024-04-12T19:04:34.503511803Z" level=warning msg="cleanup warnings time=\"2024-04-12T19:04:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3878 runtime=io.containerd.runc.v2\n"
Apr 12 19:04:34.835389 env[1144]: time="2024-04-12T19:04:34.835197644Z" level=info msg="CreateContainer within sandbox \"cb66cdf609d761980ff9107b040abfe241ca5bc9e9647b309fb27348fa07a8be\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 12 19:04:34.856200 env[1144]: time="2024-04-12T19:04:34.856112967Z" level=info msg="CreateContainer within sandbox \"cb66cdf609d761980ff9107b040abfe241ca5bc9e9647b309fb27348fa07a8be\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0628a4753c49dc1fad39e8c249073d630900d0200b8d606cf48d3a0c72d8f529\""
Apr 12 19:04:34.857304 env[1144]: time="2024-04-12T19:04:34.857218083Z" level=info msg="StartContainer for \"0628a4753c49dc1fad39e8c249073d630900d0200b8d606cf48d3a0c72d8f529\""
Apr 12 19:04:34.905374 systemd[1]: Started cri-containerd-0628a4753c49dc1fad39e8c249073d630900d0200b8d606cf48d3a0c72d8f529.scope.
Apr 12 19:04:34.993635 env[1144]: time="2024-04-12T19:04:34.993564324Z" level=info msg="StartContainer for \"0628a4753c49dc1fad39e8c249073d630900d0200b8d606cf48d3a0c72d8f529\" returns successfully"
Apr 12 19:04:34.997826 systemd[1]: cri-containerd-0628a4753c49dc1fad39e8c249073d630900d0200b8d606cf48d3a0c72d8f529.scope: Deactivated successfully.
Apr 12 19:04:35.035406 env[1144]: time="2024-04-12T19:04:35.035306736Z" level=info msg="shim disconnected" id=0628a4753c49dc1fad39e8c249073d630900d0200b8d606cf48d3a0c72d8f529
Apr 12 19:04:35.035406 env[1144]: time="2024-04-12T19:04:35.035393895Z" level=warning msg="cleaning up after shim disconnected" id=0628a4753c49dc1fad39e8c249073d630900d0200b8d606cf48d3a0c72d8f529 namespace=k8s.io
Apr 12 19:04:35.035406 env[1144]: time="2024-04-12T19:04:35.035413329Z" level=info msg="cleaning up dead shim"
Apr 12 19:04:35.052645 env[1144]: time="2024-04-12T19:04:35.052569057Z" level=warning msg="cleanup warnings time=\"2024-04-12T19:04:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3941 runtime=io.containerd.runc.v2\n"
Apr 12 19:04:35.843012 env[1144]: time="2024-04-12T19:04:35.841029109Z" level=info msg="CreateContainer within sandbox \"cb66cdf609d761980ff9107b040abfe241ca5bc9e9647b309fb27348fa07a8be\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 12 19:04:35.876280 env[1144]: time="2024-04-12T19:04:35.876197722Z" level=info msg="CreateContainer within sandbox \"cb66cdf609d761980ff9107b040abfe241ca5bc9e9647b309fb27348fa07a8be\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4ab935ba316faac29991a22a628f03ddc3dc1e4b1426bee55aaf753edd093cd0\""
Apr 12 19:04:35.877871 env[1144]: time="2024-04-12T19:04:35.877821262Z" level=info msg="StartContainer for \"4ab935ba316faac29991a22a628f03ddc3dc1e4b1426bee55aaf753edd093cd0\""
Apr 12 19:04:35.910050 systemd[1]: Started cri-containerd-4ab935ba316faac29991a22a628f03ddc3dc1e4b1426bee55aaf753edd093cd0.scope.
Apr 12 19:04:35.988838 env[1144]: time="2024-04-12T19:04:35.986670040Z" level=info msg="StartContainer for \"4ab935ba316faac29991a22a628f03ddc3dc1e4b1426bee55aaf753edd093cd0\" returns successfully"
Apr 12 19:04:35.994139 systemd[1]: cri-containerd-4ab935ba316faac29991a22a628f03ddc3dc1e4b1426bee55aaf753edd093cd0.scope: Deactivated successfully.
Apr 12 19:04:36.029743 env[1144]: time="2024-04-12T19:04:36.029663594Z" level=info msg="shim disconnected" id=4ab935ba316faac29991a22a628f03ddc3dc1e4b1426bee55aaf753edd093cd0
Apr 12 19:04:36.029743 env[1144]: time="2024-04-12T19:04:36.029745610Z" level=warning msg="cleaning up after shim disconnected" id=4ab935ba316faac29991a22a628f03ddc3dc1e4b1426bee55aaf753edd093cd0 namespace=k8s.io
Apr 12 19:04:36.030309 env[1144]: time="2024-04-12T19:04:36.029762813Z" level=info msg="cleaning up dead shim"
Apr 12 19:04:36.044221 env[1144]: time="2024-04-12T19:04:36.044121986Z" level=warning msg="cleanup warnings time=\"2024-04-12T19:04:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4000 runtime=io.containerd.runc.v2\n"
Apr 12 19:04:36.062259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ab935ba316faac29991a22a628f03ddc3dc1e4b1426bee55aaf753edd093cd0-rootfs.mount: Deactivated successfully.
Apr 12 19:04:36.377916 env[1144]: time="2024-04-12T19:04:36.377835429Z" level=info msg="StopPodSandbox for \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\""
Apr 12 19:04:36.378225 env[1144]: time="2024-04-12T19:04:36.378002380Z" level=info msg="TearDown network for sandbox \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" successfully"
Apr 12 19:04:36.378225 env[1144]: time="2024-04-12T19:04:36.378056463Z" level=info msg="StopPodSandbox for \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" returns successfully"
Apr 12 19:04:36.378704 env[1144]: time="2024-04-12T19:04:36.378660992Z" level=info msg="RemovePodSandbox for \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\""
Apr 12 19:04:36.378943 env[1144]: time="2024-04-12T19:04:36.378858940Z" level=info msg="Forcibly stopping sandbox \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\""
Apr 12 19:04:36.379079 env[1144]: time="2024-04-12T19:04:36.379028999Z" level=info msg="TearDown network for sandbox \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" successfully"
Apr 12 19:04:36.384951 env[1144]: time="2024-04-12T19:04:36.384895944Z" level=info msg="RemovePodSandbox \"c5249c35ac7be4eb361073e27aa89d07fd65d0543af62384dcc77f095b22b699\" returns successfully"
Apr 12 19:04:36.385506 env[1144]: time="2024-04-12T19:04:36.385464245Z" level=info msg="StopPodSandbox for \"d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8\""
Apr 12 19:04:36.385652 env[1144]: time="2024-04-12T19:04:36.385590116Z" level=info msg="TearDown network for sandbox \"d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8\" successfully"
Apr 12 19:04:36.385746 env[1144]: time="2024-04-12T19:04:36.385653319Z" level=info msg="StopPodSandbox for \"d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8\" returns successfully"
Apr 12 19:04:36.386216 env[1144]: time="2024-04-12T19:04:36.386167638Z" level=info msg="RemovePodSandbox for \"d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8\""
Apr 12 19:04:36.386340 env[1144]: time="2024-04-12T19:04:36.386216589Z" level=info msg="Forcibly stopping sandbox \"d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8\""
Apr 12 19:04:36.386415 env[1144]: time="2024-04-12T19:04:36.386332836Z" level=info msg="TearDown network for sandbox \"d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8\" successfully"
Apr 12 19:04:36.390955 env[1144]: time="2024-04-12T19:04:36.390895718Z" level=info msg="RemovePodSandbox \"d6455881a82af8f402ce4492f67c3347e5d34db725defabf4df659e67cd3a9a8\" returns successfully"
Apr 12 19:04:36.643943 kubelet[2033]: E0412 19:04:36.643751 2033 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 19:04:36.846744 env[1144]: time="2024-04-12T19:04:36.846668416Z" level=info msg="CreateContainer within sandbox \"cb66cdf609d761980ff9107b040abfe241ca5bc9e9647b309fb27348fa07a8be\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 12 19:04:36.874121 env[1144]: time="2024-04-12T19:04:36.874042560Z" level=info msg="CreateContainer within sandbox \"cb66cdf609d761980ff9107b040abfe241ca5bc9e9647b309fb27348fa07a8be\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"442d603115ce9409e669e239d1d9ffc7f392d5ab251c6ee8fb667f2e981f65dd\""
Apr 12 19:04:36.875247 env[1144]: time="2024-04-12T19:04:36.875200091Z" level=info msg="StartContainer for \"442d603115ce9409e669e239d1d9ffc7f392d5ab251c6ee8fb667f2e981f65dd\""
Apr 12 19:04:36.919571 systemd[1]: Started cri-containerd-442d603115ce9409e669e239d1d9ffc7f392d5ab251c6ee8fb667f2e981f65dd.scope.
Apr 12 19:04:36.973110 systemd[1]: cri-containerd-442d603115ce9409e669e239d1d9ffc7f392d5ab251c6ee8fb667f2e981f65dd.scope: Deactivated successfully.
Apr 12 19:04:36.978342 env[1144]: time="2024-04-12T19:04:36.978202729Z" level=info msg="StartContainer for \"442d603115ce9409e669e239d1d9ffc7f392d5ab251c6ee8fb667f2e981f65dd\" returns successfully"
Apr 12 19:04:37.014451 env[1144]: time="2024-04-12T19:04:37.014372380Z" level=info msg="shim disconnected" id=442d603115ce9409e669e239d1d9ffc7f392d5ab251c6ee8fb667f2e981f65dd
Apr 12 19:04:37.014451 env[1144]: time="2024-04-12T19:04:37.014455539Z" level=warning msg="cleaning up after shim disconnected" id=442d603115ce9409e669e239d1d9ffc7f392d5ab251c6ee8fb667f2e981f65dd namespace=k8s.io
Apr 12 19:04:37.014956 env[1144]: time="2024-04-12T19:04:37.014472082Z" level=info msg="cleaning up dead shim"
Apr 12 19:04:37.028439 env[1144]: time="2024-04-12T19:04:37.028366736Z" level=warning msg="cleanup warnings time=\"2024-04-12T19:04:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4055 runtime=io.containerd.runc.v2\n"
Apr 12 19:04:37.062893 systemd[1]: run-containerd-runc-k8s.io-442d603115ce9409e669e239d1d9ffc7f392d5ab251c6ee8fb667f2e981f65dd-runc.WQcdsM.mount: Deactivated successfully.
Apr 12 19:04:37.063091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-442d603115ce9409e669e239d1d9ffc7f392d5ab251c6ee8fb667f2e981f65dd-rootfs.mount: Deactivated successfully.
Apr 12 19:04:37.856360 env[1144]: time="2024-04-12T19:04:37.855323570Z" level=info msg="CreateContainer within sandbox \"cb66cdf609d761980ff9107b040abfe241ca5bc9e9647b309fb27348fa07a8be\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 12 19:04:37.879333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2421770566.mount: Deactivated successfully.
Apr 12 19:04:37.890749 env[1144]: time="2024-04-12T19:04:37.890666967Z" level=info msg="CreateContainer within sandbox \"cb66cdf609d761980ff9107b040abfe241ca5bc9e9647b309fb27348fa07a8be\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4de1bbd452fc9eaff43934d8f28201c6a6ef7864ef214690ca8cb9e4da6d721d\""
Apr 12 19:04:37.892032 env[1144]: time="2024-04-12T19:04:37.891979856Z" level=info msg="StartContainer for \"4de1bbd452fc9eaff43934d8f28201c6a6ef7864ef214690ca8cb9e4da6d721d\""
Apr 12 19:04:37.897628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2551297065.mount: Deactivated successfully.
Apr 12 19:04:37.926913 systemd[1]: Started cri-containerd-4de1bbd452fc9eaff43934d8f28201c6a6ef7864ef214690ca8cb9e4da6d721d.scope.
Apr 12 19:04:37.994288 env[1144]: time="2024-04-12T19:04:37.994198812Z" level=info msg="StartContainer for \"4de1bbd452fc9eaff43934d8f28201c6a6ef7864ef214690ca8cb9e4da6d721d\" returns successfully"
Apr 12 19:04:38.635925 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 12 19:04:38.888572 kubelet[2033]: I0412 19:04:38.888399 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bj6vg" podStartSLOduration=5.888340031 podCreationTimestamp="2024-04-12 19:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 19:04:38.885836853 +0000 UTC m=+122.765672304" watchObservedRunningTime="2024-04-12 19:04:38.888340031 +0000 UTC m=+122.768175482"
Apr 12 19:04:39.392557 kubelet[2033]: I0412 19:04:39.392519 2033 setters.go:552] "Node became not ready" node="ci-3510-3-3-e31b58c8b5fe342e6ce7.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-04-12T19:04:39Z","lastTransitionTime":"2024-04-12T19:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 12 19:04:41.134994 systemd[1]: run-containerd-runc-k8s.io-4de1bbd452fc9eaff43934d8f28201c6a6ef7864ef214690ca8cb9e4da6d721d-runc.tvCPIw.mount: Deactivated successfully.
Apr 12 19:04:41.881696 systemd-networkd[1023]: lxc_health: Link UP
Apr 12 19:04:41.909838 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Apr 12 19:04:41.911129 systemd-networkd[1023]: lxc_health: Gained carrier
Apr 12 19:04:43.129667 systemd-networkd[1023]: lxc_health: Gained IPv6LL
Apr 12 19:04:43.389943 systemd[1]: run-containerd-runc-k8s.io-4de1bbd452fc9eaff43934d8f28201c6a6ef7864ef214690ca8cb9e4da6d721d-runc.VajFjY.mount: Deactivated successfully.
Apr 12 19:04:45.654105 systemd[1]: run-containerd-runc-k8s.io-4de1bbd452fc9eaff43934d8f28201c6a6ef7864ef214690ca8cb9e4da6d721d-runc.54jHYe.mount: Deactivated successfully.
Apr 12 19:04:47.928616 systemd[1]: run-containerd-runc-k8s.io-4de1bbd452fc9eaff43934d8f28201c6a6ef7864ef214690ca8cb9e4da6d721d-runc.h04E6v.mount: Deactivated successfully.
Apr 12 19:04:48.094013 sshd[3762]: pam_unix(sshd:session): session closed for user core
Apr 12 19:04:48.100273 systemd-logind[1127]: Session 24 logged out. Waiting for processes to exit.
Apr 12 19:04:48.102679 systemd[1]: sshd@24-10.128.0.35:22-139.178.89.65:52776.service: Deactivated successfully.
Apr 12 19:04:48.103989 systemd[1]: session-24.scope: Deactivated successfully.
Apr 12 19:04:48.106242 systemd-logind[1127]: Removed session 24.