Jul 2 00:43:17.748873 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 00:43:17.748896 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Jul 1 23:37:37 -00 2024
Jul 2 00:43:17.748903 kernel: efi: EFI v2.70 by EDK II
Jul 2 00:43:17.748910 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Jul 2 00:43:17.748915 kernel: random: crng init done
Jul 2 00:43:17.748921 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:43:17.748939 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Jul 2 00:43:17.748949 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 2 00:43:17.748955 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:43:17.748960 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:43:17.748966 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:43:17.748971 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:43:17.748977 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:43:17.748982 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:43:17.748991 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:43:17.748997 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:43:17.749010 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:43:17.749016 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 2 00:43:17.749022 kernel: NUMA: Failed to initialise from firmware
Jul 2 00:43:17.749028 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 00:43:17.749033 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Jul 2 00:43:17.749039 kernel: Zone ranges:
Jul 2 00:43:17.749045 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 00:43:17.749052 kernel: DMA32 empty
Jul 2 00:43:17.749057 kernel: Normal empty
Jul 2 00:43:17.749062 kernel: Movable zone start for each node
Jul 2 00:43:17.749068 kernel: Early memory node ranges
Jul 2 00:43:17.749077 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Jul 2 00:43:17.749083 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Jul 2 00:43:17.749089 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Jul 2 00:43:17.749095 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Jul 2 00:43:17.749101 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Jul 2 00:43:17.749106 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Jul 2 00:43:17.749112 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Jul 2 00:43:17.749117 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 00:43:17.749124 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 2 00:43:17.749130 kernel: psci: probing for conduit method from ACPI.
Jul 2 00:43:17.749136 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 00:43:17.749142 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 00:43:17.749148 kernel: psci: Trusted OS migration not required
Jul 2 00:43:17.749156 kernel: psci: SMC Calling Convention v1.1
Jul 2 00:43:17.749162 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 2 00:43:17.749169 kernel: ACPI: SRAT not present
Jul 2 00:43:17.749175 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Jul 2 00:43:17.749181 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Jul 2 00:43:17.749187 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 2 00:43:17.749193 kernel: Detected PIPT I-cache on CPU0
Jul 2 00:43:17.749199 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 00:43:17.749205 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 00:43:17.749211 kernel: CPU features: detected: Spectre-v4
Jul 2 00:43:17.749217 kernel: CPU features: detected: Spectre-BHB
Jul 2 00:43:17.749224 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 00:43:17.749230 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 00:43:17.749236 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 00:43:17.749242 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 2 00:43:17.749248 kernel: Policy zone: DMA
Jul 2 00:43:17.749255 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 00:43:17.749261 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:43:17.749267 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:43:17.749274 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:43:17.749318 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:43:17.749325 kernel: Memory: 2457468K/2572288K available (9792K kernel code, 2092K rwdata, 7572K rodata, 36352K init, 777K bss, 114820K reserved, 0K cma-reserved)
Jul 2 00:43:17.749334 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 00:43:17.749340 kernel: trace event string verifier disabled
Jul 2 00:43:17.749349 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:43:17.749356 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:43:17.749362 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 00:43:17.749369 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:43:17.749375 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:43:17.749381 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:43:17.749387 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 00:43:17.749393 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 00:43:17.749399 kernel: GICv3: 256 SPIs implemented
Jul 2 00:43:17.749406 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 00:43:17.749412 kernel: GICv3: Distributor has no Range Selector support
Jul 2 00:43:17.749425 kernel: Root IRQ handler: gic_handle_irq
Jul 2 00:43:17.749431 kernel: GICv3: 16 PPIs implemented
Jul 2 00:43:17.749437 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 2 00:43:17.749443 kernel: ACPI: SRAT not present
Jul 2 00:43:17.749449 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 2 00:43:17.749455 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 00:43:17.749463 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 00:43:17.749470 kernel: GICv3: using LPI property table @0x00000000400d0000
Jul 2 00:43:17.749476 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Jul 2 00:43:17.749482 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:43:17.749489 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 00:43:17.749495 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 00:43:17.749501 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 00:43:17.749507 kernel: arm-pv: using stolen time PV
Jul 2 00:43:17.749514 kernel: Console: colour dummy device 80x25
Jul 2 00:43:17.749520 kernel: ACPI: Core revision 20210730
Jul 2 00:43:17.749526 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 00:43:17.749533 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:43:17.749539 kernel: LSM: Security Framework initializing
Jul 2 00:43:17.749545 kernel: SELinux: Initializing.
Jul 2 00:43:17.749552 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:43:17.749558 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:43:17.749565 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:43:17.749571 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 2 00:43:17.749577 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 2 00:43:17.749583 kernel: Remapping and enabling EFI services.
Jul 2 00:43:17.749589 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:43:17.749595 kernel: Detected PIPT I-cache on CPU1
Jul 2 00:43:17.749602 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 2 00:43:17.749610 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Jul 2 00:43:17.749616 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:43:17.749622 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 00:43:17.749628 kernel: Detected PIPT I-cache on CPU2
Jul 2 00:43:17.749635 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 2 00:43:17.749641 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Jul 2 00:43:17.749648 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:43:17.749654 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 2 00:43:17.749660 kernel: Detected PIPT I-cache on CPU3
Jul 2 00:43:17.749666 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 2 00:43:17.749673 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Jul 2 00:43:17.749679 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:43:17.749685 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 2 00:43:17.749691 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 00:43:17.749702 kernel: SMP: Total of 4 processors activated.
Jul 2 00:43:17.749710 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 00:43:17.749717 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 00:43:17.749729 kernel: CPU features: detected: Common not Private translations
Jul 2 00:43:17.749736 kernel: CPU features: detected: CRC32 instructions
Jul 2 00:43:17.749742 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 00:43:17.749749 kernel: CPU features: detected: LSE atomic instructions
Jul 2 00:43:17.749756 kernel: CPU features: detected: Privileged Access Never
Jul 2 00:43:17.749763 kernel: CPU features: detected: RAS Extension Support
Jul 2 00:43:17.749770 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 2 00:43:17.749776 kernel: CPU: All CPU(s) started at EL1
Jul 2 00:43:17.749782 kernel: alternatives: patching kernel code
Jul 2 00:43:17.749790 kernel: devtmpfs: initialized
Jul 2 00:43:17.749797 kernel: KASLR enabled
Jul 2 00:43:17.749803 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:43:17.749810 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 00:43:17.749816 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:43:17.749825 kernel: SMBIOS 3.0.0 present.
Jul 2 00:43:17.749832 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jul 2 00:43:17.749838 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:43:17.749845 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 00:43:17.749852 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 00:43:17.749860 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 00:43:17.749866 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:43:17.749873 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
Jul 2 00:43:17.749879 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:43:17.749886 kernel: cpuidle: using governor menu
Jul 2 00:43:17.749892 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 00:43:17.749898 kernel: ASID allocator initialised with 32768 entries
Jul 2 00:43:17.749905 kernel: ACPI: bus type PCI registered
Jul 2 00:43:17.749911 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:43:17.749919 kernel: Serial: AMBA PL011 UART driver
Jul 2 00:43:17.749932 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:43:17.749939 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 00:43:17.749945 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:43:17.749952 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 00:43:17.749958 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:43:17.749965 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 00:43:17.749971 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:43:17.749978 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:43:17.749985 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:43:17.749992 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:43:17.749998 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 00:43:17.750005 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 00:43:17.750011 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 00:43:17.750018 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:43:17.750024 kernel: ACPI: Interpreter enabled
Jul 2 00:43:17.750030 kernel: ACPI: Using GIC for interrupt routing
Jul 2 00:43:17.750037 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 00:43:17.750044 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 00:43:17.750051 kernel: printk: console [ttyAMA0] enabled
Jul 2 00:43:17.750057 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:43:17.750199 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:43:17.750267 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 00:43:17.750327 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 00:43:17.750386 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 2 00:43:17.750448 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 2 00:43:17.750457 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 2 00:43:17.750472 kernel: PCI host bridge to bus 0000:00
Jul 2 00:43:17.750549 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 2 00:43:17.750607 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 00:43:17.750661 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 2 00:43:17.750717 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:43:17.750804 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 2 00:43:17.750877 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 00:43:17.750953 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 2 00:43:17.751016 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 2 00:43:17.751083 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 00:43:17.751146 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 00:43:17.751238 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 2 00:43:17.751314 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 2 00:43:17.751370 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 2 00:43:17.751425 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 00:43:17.751480 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 2 00:43:17.751489 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 00:43:17.751496 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 00:43:17.751502 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 00:43:17.751511 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 00:43:17.751521 kernel: iommu: Default domain type: Translated
Jul 2 00:43:17.751528 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 00:43:17.751535 kernel: vgaarb: loaded
Jul 2 00:43:17.751541 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 00:43:17.751548 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 00:43:17.751561 kernel: PTP clock support registered
Jul 2 00:43:17.751567 kernel: Registered efivars operations
Jul 2 00:43:17.751574 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 00:43:17.751581 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:43:17.751589 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:43:17.751596 kernel: pnp: PnP ACPI init
Jul 2 00:43:17.751663 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 2 00:43:17.751672 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 00:43:17.751679 kernel: NET: Registered PF_INET protocol family
Jul 2 00:43:17.751686 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:43:17.751693 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:43:17.751700 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:43:17.751708 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:43:17.751715 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 2 00:43:17.751728 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:43:17.751735 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:43:17.751743 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:43:17.751752 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:43:17.751758 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:43:17.751765 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 2 00:43:17.751774 kernel: kvm [1]: HYP mode not available
Jul 2 00:43:17.751780 kernel: Initialise system trusted keyrings
Jul 2 00:43:17.751787 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:43:17.751794 kernel: Key type asymmetric registered
Jul 2 00:43:17.751800 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:43:17.751807 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 00:43:17.751813 kernel: io scheduler mq-deadline registered
Jul 2 00:43:17.751823 kernel: io scheduler kyber registered
Jul 2 00:43:17.751833 kernel: io scheduler bfq registered
Jul 2 00:43:17.751839 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 00:43:17.751848 kernel: ACPI: button: Power Button [PWRB]
Jul 2 00:43:17.751855 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 00:43:17.751969 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 2 00:43:17.751981 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:43:17.751988 kernel: thunder_xcv, ver 1.0
Jul 2 00:43:17.751994 kernel: thunder_bgx, ver 1.0
Jul 2 00:43:17.752001 kernel: nicpf, ver 1.0
Jul 2 00:43:17.752008 kernel: nicvf, ver 1.0
Jul 2 00:43:17.752083 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 00:43:17.752146 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T00:43:17 UTC (1719880997)
Jul 2 00:43:17.752155 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 00:43:17.752161 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:43:17.752168 kernel: Segment Routing with IPv6
Jul 2 00:43:17.752175 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:43:17.752181 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:43:17.752188 kernel: Key type dns_resolver registered
Jul 2 00:43:17.752195 kernel: registered taskstats version 1
Jul 2 00:43:17.752203 kernel: Loading compiled-in X.509 certificates
Jul 2 00:43:17.752210 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: c418313b450e4055b23e41c11cb6dc415de0265d'
Jul 2 00:43:17.752216 kernel: Key type .fscrypt registered
Jul 2 00:43:17.752761 kernel: Key type fscrypt-provisioning registered
Jul 2 00:43:17.752782 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:43:17.752789 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:43:17.752796 kernel: ima: No architecture policies found
Jul 2 00:43:17.752802 kernel: clk: Disabling unused clocks
Jul 2 00:43:17.752809 kernel: Freeing unused kernel memory: 36352K
Jul 2 00:43:17.752823 kernel: Run /init as init process
Jul 2 00:43:17.752829 kernel: with arguments:
Jul 2 00:43:17.752836 kernel: /init
Jul 2 00:43:17.752842 kernel: with environment:
Jul 2 00:43:17.752849 kernel: HOME=/
Jul 2 00:43:17.752855 kernel: TERM=linux
Jul 2 00:43:17.752862 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:43:17.752871 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 00:43:17.752881 systemd[1]: Detected virtualization kvm.
Jul 2 00:43:17.752889 systemd[1]: Detected architecture arm64.
Jul 2 00:43:17.752896 systemd[1]: Running in initrd.
Jul 2 00:43:17.752903 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:43:17.752910 systemd[1]: Hostname set to .
Jul 2 00:43:17.752917 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:43:17.752955 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:43:17.752963 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 00:43:17.752973 systemd[1]: Reached target cryptsetup.target.
Jul 2 00:43:17.752980 systemd[1]: Reached target paths.target.
Jul 2 00:43:17.752988 systemd[1]: Reached target slices.target.
Jul 2 00:43:17.752995 systemd[1]: Reached target swap.target.
Jul 2 00:43:17.753002 systemd[1]: Reached target timers.target.
Jul 2 00:43:17.753009 systemd[1]: Listening on iscsid.socket.
Jul 2 00:43:17.753016 systemd[1]: Listening on iscsiuio.socket.
Jul 2 00:43:17.753025 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 00:43:17.753033 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 00:43:17.753041 systemd[1]: Listening on systemd-journald.socket.
Jul 2 00:43:17.753048 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 00:43:17.753055 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 00:43:17.753063 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 00:43:17.753070 systemd[1]: Reached target sockets.target.
Jul 2 00:43:17.753080 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 00:43:17.753088 systemd[1]: Finished network-cleanup.service.
Jul 2 00:43:17.753096 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:43:17.753104 systemd[1]: Starting systemd-journald.service...
Jul 2 00:43:17.753111 systemd[1]: Starting systemd-modules-load.service...
Jul 2 00:43:17.753119 systemd[1]: Starting systemd-resolved.service...
Jul 2 00:43:17.753126 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 00:43:17.753134 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 00:43:17.753141 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:43:17.753148 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 00:43:17.753155 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 00:43:17.753163 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 00:43:17.753171 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 00:43:17.753180 kernel: audit: type=1130 audit(1719880997.749:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:17.753192 systemd-journald[290]: Journal started
Jul 2 00:43:17.753250 systemd-journald[290]: Runtime Journal (/run/log/journal/fe3b90f2f0df44ae931aeb3bbfb340cc) is 6.0M, max 48.7M, 42.6M free.
Jul 2 00:43:17.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:17.744087 systemd-modules-load[291]: Inserted module 'overlay'
Jul 2 00:43:17.754921 systemd[1]: Started systemd-journald.service.
Jul 2 00:43:17.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:17.757941 kernel: audit: type=1130 audit(1719880997.755:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:17.770088 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:43:17.770124 kernel: Bridge firewalling registered
Jul 2 00:43:17.770384 systemd-modules-load[291]: Inserted module 'br_netfilter'
Jul 2 00:43:17.771348 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 00:43:17.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:17.774074 systemd-resolved[292]: Positive Trust Anchors:
Jul 2 00:43:17.775195 kernel: audit: type=1130 audit(1719880997.771:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:17.774081 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:43:17.774108 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 00:43:17.775880 systemd[1]: Starting dracut-cmdline.service...
Jul 2 00:43:17.780267 systemd-resolved[292]: Defaulting to hostname 'linux'.
Jul 2 00:43:17.785484 kernel: SCSI subsystem initialized
Jul 2 00:43:17.785504 kernel: audit: type=1130 audit(1719880997.782:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:17.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:17.781165 systemd[1]: Started systemd-resolved.service.
Jul 2 00:43:17.782824 systemd[1]: Reached target nss-lookup.target.
Jul 2 00:43:17.789956 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:43:17.789994 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:43:17.790004 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 2 00:43:17.791431 dracut-cmdline[309]: dracut-dracut-053
Jul 2 00:43:17.792904 systemd-modules-load[291]: Inserted module 'dm_multipath'
Jul 2 00:43:17.793643 systemd[1]: Finished systemd-modules-load.service.
Jul 2 00:43:17.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:17.795170 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 00:43:17.800421 kernel: audit: type=1130 audit(1719880997.794:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:17.795446 systemd[1]: Starting systemd-sysctl.service...
Jul 2 00:43:17.803500 systemd[1]: Finished systemd-sysctl.service.
Jul 2 00:43:17.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:17.807112 kernel: audit: type=1130 audit(1719880997.803:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:17.860945 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:43:17.872953 kernel: iscsi: registered transport (tcp)
Jul 2 00:43:17.887955 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:43:17.888005 kernel: QLogic iSCSI HBA Driver
Jul 2 00:43:17.922998 systemd[1]: Finished dracut-cmdline.service.
Jul 2 00:43:17.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:17.924499 systemd[1]: Starting dracut-pre-udev.service...
Jul 2 00:43:17.927521 kernel: audit: type=1130 audit(1719880997.922:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:17.969943 kernel: raid6: neonx8 gen() 13798 MB/s
Jul 2 00:43:17.986942 kernel: raid6: neonx8 xor() 10830 MB/s
Jul 2 00:43:18.003938 kernel: raid6: neonx4 gen() 13557 MB/s
Jul 2 00:43:18.020938 kernel: raid6: neonx4 xor() 11302 MB/s
Jul 2 00:43:18.037936 kernel: raid6: neonx2 gen() 13069 MB/s
Jul 2 00:43:18.054936 kernel: raid6: neonx2 xor() 10237 MB/s
Jul 2 00:43:18.071933 kernel: raid6: neonx1 gen() 10554 MB/s
Jul 2 00:43:18.088943 kernel: raid6: neonx1 xor() 8789 MB/s
Jul 2 00:43:18.105941 kernel: raid6: int64x8 gen() 6272 MB/s
Jul 2 00:43:18.122937 kernel: raid6: int64x8 xor() 3544 MB/s
Jul 2 00:43:18.139937 kernel: raid6: int64x4 gen() 7221 MB/s
Jul 2 00:43:18.156936 kernel: raid6: int64x4 xor() 3854 MB/s
Jul 2 00:43:18.173948 kernel: raid6: int64x2 gen() 6155 MB/s
Jul 2 00:43:18.190939 kernel: raid6: int64x2 xor() 3319 MB/s
Jul 2 00:43:18.207937 kernel: raid6: int64x1 gen() 5043 MB/s
Jul 2 00:43:18.225285 kernel: raid6: int64x1 xor() 2647 MB/s
Jul 2 00:43:18.225299 kernel: raid6: using algorithm neonx8 gen() 13798 MB/s
Jul 2 00:43:18.225307 kernel: raid6: .... xor() 10830 MB/s, rmw enabled
Jul 2 00:43:18.225316 kernel: raid6: using neon recovery algorithm
Jul 2 00:43:18.236942 kernel: xor: measuring software checksum speed
Jul 2 00:43:18.237938 kernel: 8regs : 17315 MB/sec
Jul 2 00:43:18.238955 kernel: 32regs : 20744 MB/sec
Jul 2 00:43:18.239940 kernel: arm64_neon : 27741 MB/sec
Jul 2 00:43:18.239951 kernel: xor: using function: arm64_neon (27741 MB/sec)
Jul 2 00:43:18.303949 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 2 00:43:18.323521 systemd[1]: Finished dracut-pre-udev.service.
Jul 2 00:43:18.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:18.326956 kernel: audit: type=1130 audit(1719880998.324:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:18.326980 kernel: audit: type=1334 audit(1719880998.326:10): prog-id=7 op=LOAD
Jul 2 00:43:18.326000 audit: BPF prog-id=7 op=LOAD
Jul 2 00:43:18.327412 systemd[1]: Starting systemd-udevd.service...
Jul 2 00:43:18.326000 audit: BPF prog-id=8 op=LOAD
Jul 2 00:43:18.353138 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Jul 2 00:43:18.356478 systemd[1]: Started systemd-udevd.service.
Jul 2 00:43:18.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:18.358639 systemd[1]: Starting dracut-pre-trigger.service...
Jul 2 00:43:18.370027 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Jul 2 00:43:18.402238 systemd[1]: Finished dracut-pre-trigger.service.
Jul 2 00:43:18.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:18.403661 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 00:43:18.447154 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 00:43:18.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:18.472947 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 00:43:18.473109 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:43:18.474105 kernel: GPT:9289727 != 19775487
Jul 2 00:43:18.474126 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:43:18.475015 kernel: GPT:9289727 != 19775487
Jul 2 00:43:18.475033 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:43:18.475940 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:43:18.490946 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (547)
Jul 2 00:43:18.497731 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 2 00:43:18.500383 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Jul 2 00:43:18.501192 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 2 00:43:18.506032 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 2 00:43:18.509210 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 2 00:43:18.510742 systemd[1]: Starting disk-uuid.service...
Jul 2 00:43:18.516305 disk-uuid[562]: Primary Header is updated.
Jul 2 00:43:18.516305 disk-uuid[562]: Secondary Entries is updated.
Jul 2 00:43:18.516305 disk-uuid[562]: Secondary Header is updated.
Jul 2 00:43:18.518947 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:43:19.530670 disk-uuid[563]: The operation has completed successfully.
Jul 2 00:43:19.532153 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:43:19.559700 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:43:19.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.559807 systemd[1]: Finished disk-uuid.service.
Jul 2 00:43:19.561509 systemd[1]: Starting verity-setup.service...
Jul 2 00:43:19.576958 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 00:43:19.595448 systemd[1]: Found device dev-mapper-usr.device.
Jul 2 00:43:19.597475 systemd[1]: Mounting sysusr-usr.mount...
Jul 2 00:43:19.599894 systemd[1]: Finished verity-setup.service.
Jul 2 00:43:19.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.641941 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Jul 2 00:43:19.642033 systemd[1]: Mounted sysusr-usr.mount.
Jul 2 00:43:19.642670 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Jul 2 00:43:19.643400 systemd[1]: Starting ignition-setup.service...
Jul 2 00:43:19.645006 systemd[1]: Starting parse-ip-for-networkd.service...
Jul 2 00:43:19.651336 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:43:19.651372 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:43:19.651381 kernel: BTRFS info (device vda6): has skinny extents
Jul 2 00:43:19.660046 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:43:19.666188 systemd[1]: Finished ignition-setup.service.
Jul 2 00:43:19.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.667643 systemd[1]: Starting ignition-fetch-offline.service...
Jul 2 00:43:19.730013 systemd[1]: Finished parse-ip-for-networkd.service.
Jul 2 00:43:19.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.731000 audit: BPF prog-id=9 op=LOAD
Jul 2 00:43:19.732399 systemd[1]: Starting systemd-networkd.service...
Jul 2 00:43:19.753335 systemd-networkd[739]: lo: Link UP
Jul 2 00:43:19.753344 systemd-networkd[739]: lo: Gained carrier
Jul 2 00:43:19.753854 systemd-networkd[739]: Enumeration completed
Jul 2 00:43:19.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.754124 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:43:19.754247 systemd[1]: Started systemd-networkd.service.
Jul 2 00:43:19.755340 systemd-networkd[739]: eth0: Link UP
Jul 2 00:43:19.755344 systemd-networkd[739]: eth0: Gained carrier
Jul 2 00:43:19.755761 systemd[1]: Reached target network.target.
Jul 2 00:43:19.757859 systemd[1]: Starting iscsiuio.service...
Jul 2 00:43:19.770814 ignition[650]: Ignition 2.14.0
Jul 2 00:43:19.770824 ignition[650]: Stage: fetch-offline
Jul 2 00:43:19.770878 ignition[650]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:43:19.770886 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:43:19.771138 ignition[650]: parsed url from cmdline: ""
Jul 2 00:43:19.774038 systemd[1]: Started iscsiuio.service.
Jul 2 00:43:19.771142 ignition[650]: no config URL provided
Jul 2 00:43:19.771146 ignition[650]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:43:19.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.775984 systemd[1]: Starting iscsid.service...
Jul 2 00:43:19.771153 ignition[650]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:43:19.771172 ignition[650]: op(1): [started] loading QEMU firmware config module
Jul 2 00:43:19.771177 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 00:43:19.779011 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:43:19.780989 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 00:43:19.780989 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Jul 2 00:43:19.780989 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul 2 00:43:19.780989 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 2 00:43:19.780989 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 00:43:19.780989 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Jul 2 00:43:19.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.787046 ignition[650]: op(1): [finished] loading QEMU firmware config module
Jul 2 00:43:19.787330 systemd[1]: Started iscsid.service.
Jul 2 00:43:19.789471 systemd[1]: Starting dracut-initqueue.service...
Jul 2 00:43:19.796978 ignition[650]: parsing config with SHA512: bf449cdaa9f04663412635b0255fa171a182f19af7c8ffb9a9b1e5d103deef51fc1913301dce998a05e99afecf91c24b5b402404a5a5544f247ba9a33c9b28e9
Jul 2 00:43:19.799963 systemd[1]: Finished dracut-initqueue.service.
Jul 2 00:43:19.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.800677 systemd[1]: Reached target remote-fs-pre.target.
Jul 2 00:43:19.801684 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 00:43:19.802973 systemd[1]: Reached target remote-fs.target.
Jul 2 00:43:19.805267 systemd[1]: Starting dracut-pre-mount.service...
Jul 2 00:43:19.807508 unknown[650]: fetched base config from "system"
Jul 2 00:43:19.807520 unknown[650]: fetched user config from "qemu"
Jul 2 00:43:19.807807 ignition[650]: fetch-offline: fetch-offline passed
Jul 2 00:43:19.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.809054 systemd[1]: Finished ignition-fetch-offline.service.
Jul 2 00:43:19.807858 ignition[650]: Ignition finished successfully
Jul 2 00:43:19.810283 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 00:43:19.811043 systemd[1]: Starting ignition-kargs.service...
Jul 2 00:43:19.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.813879 systemd[1]: Finished dracut-pre-mount.service.
Jul 2 00:43:19.819725 ignition[758]: Ignition 2.14.0
Jul 2 00:43:19.819736 ignition[758]: Stage: kargs
Jul 2 00:43:19.819838 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:43:19.819848 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:43:19.821677 systemd[1]: Finished ignition-kargs.service.
Jul 2 00:43:19.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.820540 ignition[758]: kargs: kargs passed
Jul 2 00:43:19.820582 ignition[758]: Ignition finished successfully
Jul 2 00:43:19.823669 systemd[1]: Starting ignition-disks.service...
Jul 2 00:43:19.830573 ignition[766]: Ignition 2.14.0
Jul 2 00:43:19.830581 ignition[766]: Stage: disks
Jul 2 00:43:19.830673 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:43:19.830683 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:43:19.832401 systemd[1]: Finished ignition-disks.service.
Jul 2 00:43:19.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.831377 ignition[766]: disks: disks passed
Jul 2 00:43:19.833351 systemd[1]: Reached target initrd-root-device.target.
Jul 2 00:43:19.831420 ignition[766]: Ignition finished successfully
Jul 2 00:43:19.834596 systemd[1]: Reached target local-fs-pre.target.
Jul 2 00:43:19.835708 systemd[1]: Reached target local-fs.target.
Jul 2 00:43:19.836694 systemd[1]: Reached target sysinit.target.
Jul 2 00:43:19.837710 systemd[1]: Reached target basic.target.
Jul 2 00:43:19.839578 systemd[1]: Starting systemd-fsck-root.service...
Jul 2 00:43:19.851149 systemd-fsck[775]: ROOT: clean, 614/553520 files, 56019/553472 blocks
Jul 2 00:43:19.854386 systemd[1]: Finished systemd-fsck-root.service.
Jul 2 00:43:19.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.855872 systemd[1]: Mounting sysroot.mount...
Jul 2 00:43:19.860938 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 2 00:43:19.861793 systemd[1]: Mounted sysroot.mount.
Jul 2 00:43:19.862442 systemd[1]: Reached target initrd-root-fs.target.
Jul 2 00:43:19.864386 systemd[1]: Mounting sysroot-usr.mount...
Jul 2 00:43:19.865215 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Jul 2 00:43:19.865265 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:43:19.865291 systemd[1]: Reached target ignition-diskful.target.
Jul 2 00:43:19.867608 systemd[1]: Mounted sysroot-usr.mount.
Jul 2 00:43:19.869550 systemd[1]: Starting initrd-setup-root.service...
Jul 2 00:43:19.874229 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:43:19.879196 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:43:19.883261 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:43:19.887595 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:43:19.915749 systemd[1]: Finished initrd-setup-root.service.
Jul 2 00:43:19.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.917301 systemd[1]: Starting ignition-mount.service...
Jul 2 00:43:19.918657 systemd[1]: Starting sysroot-boot.service...
Jul 2 00:43:19.923334 bash[826]: umount: /sysroot/usr/share/oem: not mounted.
Jul 2 00:43:19.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:19.934566 ignition[828]: INFO : Ignition 2.14.0
Jul 2 00:43:19.934566 ignition[828]: INFO : Stage: mount
Jul 2 00:43:19.934566 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:43:19.934566 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:43:19.934566 ignition[828]: INFO : mount: mount passed
Jul 2 00:43:19.934566 ignition[828]: INFO : Ignition finished successfully
Jul 2 00:43:19.932825 systemd[1]: Finished ignition-mount.service.
Jul 2 00:43:19.943682 systemd[1]: Finished sysroot-boot.service.
Jul 2 00:43:19.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:20.606139 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 00:43:20.612138 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (837)
Jul 2 00:43:20.612172 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:43:20.612181 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:43:20.613030 kernel: BTRFS info (device vda6): has skinny extents
Jul 2 00:43:20.615693 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 00:43:20.617029 systemd[1]: Starting ignition-files.service...
Jul 2 00:43:20.630657 ignition[857]: INFO : Ignition 2.14.0
Jul 2 00:43:20.630657 ignition[857]: INFO : Stage: files
Jul 2 00:43:20.631974 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:43:20.631974 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:43:20.631974 ignition[857]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:43:20.636188 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:43:20.636188 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:43:20.639170 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:43:20.640137 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:43:20.641352 unknown[857]: wrote ssh authorized keys file for user: core
Jul 2 00:43:20.642253 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:43:20.642253 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:43:20.642253 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:43:20.642253 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:43:20.642253 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:43:20.648664 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 00:43:20.648664 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 00:43:20.648664 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 00:43:20.648664 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jul 2 00:43:20.906984 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jul 2 00:43:21.205080 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 00:43:21.206747 ignition[857]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jul 2 00:43:21.207944 ignition[857]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:43:21.207944 ignition[857]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:43:21.207944 ignition[857]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jul 2 00:43:21.207944 ignition[857]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:43:21.207944 ignition[857]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:43:21.245990 ignition[857]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:43:21.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:21.251341 ignition[857]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:43:21.251341 ignition[857]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:43:21.251341 ignition[857]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:43:21.251341 ignition[857]: INFO : files: files passed
Jul 2 00:43:21.251341 ignition[857]: INFO : Ignition finished successfully
Jul 2 00:43:21.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:21.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:21.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:21.247474 systemd[1]: Finished ignition-files.service.
Jul 2 00:43:21.249141 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 2 00:43:21.260864 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Jul 2 00:43:21.249883 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 2 00:43:21.263651 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:43:21.250561 systemd[1]: Starting ignition-quench.service...
Jul 2 00:43:21.255905 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 2 00:43:21.256858 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:43:21.256935 systemd[1]: Finished ignition-quench.service.
Jul 2 00:43:21.257615 systemd[1]: Reached target ignition-complete.target.
Jul 2 00:43:21.258888 systemd[1]: Starting initrd-parse-etc.service...
Jul 2 00:43:21.270610 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:43:21.270694 systemd[1]: Finished initrd-parse-etc.service.
Jul 2 00:43:21.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:21.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:21.272096 systemd[1]: Reached target initrd-fs.target.
Jul 2 00:43:21.273065 systemd[1]: Reached target initrd.target.
Jul 2 00:43:21.274106 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 2 00:43:21.274769 systemd[1]: Starting dracut-pre-pivot.service...
Jul 2 00:43:21.286281 systemd[1]: Finished dracut-pre-pivot.service.
Jul 2 00:43:21.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:21.287708 systemd[1]: Starting initrd-cleanup.service...
Jul 2 00:43:21.296811 systemd[1]: Stopped target nss-lookup.target.
Jul 2 00:43:21.297622 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 2 00:43:21.298788 systemd[1]: Stopped target timers.target.
Jul 2 00:43:21.299937 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:43:21.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:21.300053 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 2 00:43:21.301173 systemd[1]: Stopped target initrd.target.
Jul 2 00:43:21.302277 systemd[1]: Stopped target basic.target.
Jul 2 00:43:21.303354 systemd[1]: Stopped target ignition-complete.target.
Jul 2 00:43:21.304472 systemd[1]: Stopped target ignition-diskful.target.
Jul 2 00:43:21.305604 systemd[1]: Stopped target initrd-root-device.target.
Jul 2 00:43:21.306794 systemd[1]: Stopped target remote-fs.target.
Jul 2 00:43:21.307989 systemd[1]: Stopped target remote-fs-pre.target.
Jul 2 00:43:21.309232 systemd[1]: Stopped target sysinit.target.
Jul 2 00:43:21.310344 systemd[1]: Stopped target local-fs.target.
Jul 2 00:43:21.311458 systemd[1]: Stopped target local-fs-pre.target.
Jul 2 00:43:21.312624 systemd[1]: Stopped target swap.target.
Jul 2 00:43:21.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:21.313678 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:43:21.313799 systemd[1]: Stopped dracut-pre-mount.service.
Jul 2 00:43:21.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.314955 systemd[1]: Stopped target cryptsetup.target. Jul 2 00:43:21.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.315855 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:43:21.315962 systemd[1]: Stopped dracut-initqueue.service. Jul 2 00:43:21.317203 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:43:21.317296 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 00:43:21.318409 systemd[1]: Stopped target paths.target. Jul 2 00:43:21.319417 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:43:21.322984 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 00:43:21.324422 systemd[1]: Stopped target slices.target. Jul 2 00:43:21.325586 systemd[1]: Stopped target sockets.target. Jul 2 00:43:21.326599 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:43:21.326669 systemd[1]: Closed iscsid.socket. Jul 2 00:43:21.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.327609 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 00:43:21.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.327708 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 00:43:21.328918 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 00:43:21.329018 systemd[1]: Stopped ignition-files.service. Jul 2 00:43:21.330811 systemd[1]: Stopping ignition-mount.service... Jul 2 00:43:21.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.332059 systemd[1]: Stopping iscsiuio.service... Jul 2 00:43:21.333046 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 00:43:21.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.333165 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 00:43:21.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.335236 systemd[1]: Stopping sysroot-boot.service... Jul 2 00:43:21.335780 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jul 2 00:43:21.341044 ignition[897]: INFO : Ignition 2.14.0 Jul 2 00:43:21.341044 ignition[897]: INFO : Stage: umount Jul 2 00:43:21.341044 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:43:21.341044 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:43:21.341044 ignition[897]: INFO : umount: umount passed Jul 2 00:43:21.341044 ignition[897]: INFO : Ignition finished successfully Jul 2 00:43:21.335899 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 00:43:21.337247 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 00:43:21.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.337337 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 00:43:21.340002 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 00:43:21.340107 systemd[1]: Stopped iscsiuio.service. Jul 2 00:43:21.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.348145 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 00:43:21.348741 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 00:43:21.348844 systemd[1]: Stopped ignition-mount.service. Jul 2 00:43:21.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.349951 systemd[1]: Stopped target network.target. Jul 2 00:43:21.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.350679 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:43:21.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.350710 systemd[1]: Closed iscsiuio.socket. Jul 2 00:43:21.351769 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 00:43:21.351808 systemd[1]: Stopped ignition-disks.service. Jul 2 00:43:21.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.352522 systemd-networkd[739]: eth0: Gained IPv6LL Jul 2 00:43:21.353619 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 00:43:21.353663 systemd[1]: Stopped ignition-kargs.service. Jul 2 00:43:21.354652 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 00:43:21.354686 systemd[1]: Stopped ignition-setup.service. Jul 2 00:43:21.356228 systemd[1]: Stopping systemd-networkd.service... Jul 2 00:43:21.357218 systemd[1]: Stopping systemd-resolved.service... 
Jul 2 00:43:21.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.358511 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 00:43:21.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.358600 systemd[1]: Finished initrd-cleanup.service. Jul 2 00:43:21.362979 systemd-networkd[739]: eth0: DHCPv6 lease lost Jul 2 00:43:21.364101 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:43:21.364189 systemd[1]: Stopped systemd-networkd.service. Jul 2 00:43:21.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.365358 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:43:21.371000 audit: BPF prog-id=9 op=UNLOAD Jul 2 00:43:21.371000 audit: BPF prog-id=6 op=UNLOAD Jul 2 00:43:21.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.365442 systemd[1]: Stopped systemd-resolved.service. Jul 2 00:43:21.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.366568 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:43:21.366594 systemd[1]: Closed systemd-networkd.socket. Jul 2 00:43:21.368521 systemd[1]: Stopping network-cleanup.service... Jul 2 00:43:21.369564 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 00:43:21.369634 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 00:43:21.371091 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:43:21.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.371136 systemd[1]: Stopped systemd-sysctl.service. Jul 2 00:43:21.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.372948 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 00:43:21.372988 systemd[1]: Stopped systemd-modules-load.service. Jul 2 00:43:21.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.374734 systemd[1]: Stopping systemd-udevd.service... Jul 2 00:43:21.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.378500 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 00:43:21.379060 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jul 2 00:43:21.379142 systemd[1]: Stopped sysroot-boot.service. Jul 2 00:43:21.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.380634 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:43:21.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.380685 systemd[1]: Stopped initrd-setup-root.service. Jul 2 00:43:21.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.382287 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 00:43:21.382384 systemd[1]: Stopped network-cleanup.service. Jul 2 00:43:21.383633 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 00:43:21.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.383773 systemd[1]: Stopped systemd-udevd.service. Jul 2 00:43:21.384851 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:43:21.384890 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 00:43:21.385631 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:43:21.385657 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 00:43:21.386938 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:43:21.386979 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 00:43:21.388150 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:43:21.388187 systemd[1]: Stopped dracut-cmdline.service. Jul 2 00:43:21.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.389178 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:43:21.389214 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 00:43:21.391093 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 00:43:21.392187 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:43:21.392241 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 00:43:21.398405 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:43:21.398506 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 00:43:21.399583 systemd[1]: Reached target initrd-switch-root.target. Jul 2 00:43:21.401296 systemd[1]: Starting initrd-switch-root.service... Jul 2 00:43:21.412521 systemd[1]: Switching root. Jul 2 00:43:21.429293 iscsid[745]: iscsid shutting down. Jul 2 00:43:21.429825 systemd-journald[290]: Journal stopped Jul 2 00:43:23.506182 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Jul 2 00:43:23.506238 kernel: SELinux: Class mctp_socket not defined in policy. 
Jul 2 00:43:23.506251 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 00:43:23.506261 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 00:43:23.506275 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 00:43:23.506287 kernel: SELinux: policy capability open_perms=1 Jul 2 00:43:23.506301 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 00:43:23.506311 kernel: SELinux: policy capability always_check_network=0 Jul 2 00:43:23.506324 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 00:43:23.506335 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 00:43:23.506348 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 00:43:23.506358 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 00:43:23.506369 systemd[1]: Successfully loaded SELinux policy in 32.694ms. Jul 2 00:43:23.506382 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.143ms. Jul 2 00:43:23.506395 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 00:43:23.506409 systemd[1]: Detected virtualization kvm. Jul 2 00:43:23.506419 systemd[1]: Detected architecture arm64. Jul 2 00:43:23.506430 systemd[1]: Detected first boot. Jul 2 00:43:23.506440 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:43:23.506450 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 00:43:23.506461 systemd[1]: Populated /etc with preset unit settings. Jul 2 00:43:23.506473 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:43:23.506485 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:43:23.506497 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:43:23.506508 kernel: kauditd_printk_skb: 78 callbacks suppressed Jul 2 00:43:23.506518 kernel: audit: type=1334 audit(1719881003.384:82): prog-id=12 op=LOAD Jul 2 00:43:23.506528 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 00:43:23.506538 kernel: audit: type=1334 audit(1719881003.384:83): prog-id=3 op=UNLOAD Jul 2 00:43:23.506547 kernel: audit: type=1334 audit(1719881003.384:84): prog-id=13 op=LOAD Jul 2 00:43:23.506559 systemd[1]: Stopped iscsid.service. Jul 2 00:43:23.506569 kernel: audit: type=1334 audit(1719881003.384:85): prog-id=14 op=LOAD Jul 2 00:43:23.506579 kernel: audit: type=1334 audit(1719881003.384:86): prog-id=4 op=UNLOAD Jul 2 00:43:23.506588 kernel: audit: type=1334 audit(1719881003.384:87): prog-id=5 op=UNLOAD Jul 2 00:43:23.506599 kernel: audit: type=1131 audit(1719881003.385:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:23.506609 kernel: audit: type=1131 audit(1719881003.396:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.506620 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 00:43:23.506631 systemd[1]: Stopped initrd-switch-root.service. Jul 2 00:43:23.506642 kernel: audit: type=1130 audit(1719881003.400:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.506653 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 00:43:23.506664 kernel: audit: type=1131 audit(1719881003.400:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.506675 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 00:43:23.506687 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 00:43:23.506697 systemd[1]: Created slice system-getty.slice. Jul 2 00:43:23.506708 systemd[1]: Created slice system-modprobe.slice. Jul 2 00:43:23.506726 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 00:43:23.506738 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 00:43:23.506748 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 00:43:23.506759 systemd[1]: Created slice user.slice. Jul 2 00:43:23.506769 systemd[1]: Started systemd-ask-password-console.path. Jul 2 00:43:23.506779 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 00:43:23.506790 systemd[1]: Set up automount boot.automount. Jul 2 00:43:23.506800 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 00:43:23.506812 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 00:43:23.506822 systemd[1]: Stopped target initrd-fs.target. Jul 2 00:43:23.506833 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 00:43:23.506844 systemd[1]: Reached target integritysetup.target. Jul 2 00:43:23.506855 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 00:43:23.506866 systemd[1]: Reached target remote-fs.target. Jul 2 00:43:23.506878 systemd[1]: Reached target slices.target. Jul 2 00:43:23.506889 systemd[1]: Reached target swap.target. Jul 2 00:43:23.506899 systemd[1]: Reached target torcx.target. Jul 2 00:43:23.506910 systemd[1]: Reached target veritysetup.target. Jul 2 00:43:23.506921 systemd[1]: Listening on systemd-coredump.socket. Jul 2 00:43:23.506938 systemd[1]: Listening on systemd-initctl.socket. Jul 2 00:43:23.506949 systemd[1]: Listening on systemd-networkd.socket. Jul 2 00:43:23.506960 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 00:43:23.506971 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 00:43:23.506983 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 00:43:23.506993 systemd[1]: Mounting dev-hugepages.mount... Jul 2 00:43:23.507004 systemd[1]: Mounting dev-mqueue.mount... Jul 2 00:43:23.507014 systemd[1]: Mounting media.mount... Jul 2 00:43:23.507024 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 00:43:23.507034 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 00:43:23.507045 systemd[1]: Mounting tmp.mount... Jul 2 00:43:23.507055 systemd[1]: Starting flatcar-tmpfiles.service... 
Jul 2 00:43:23.507065 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:43:23.507076 systemd[1]: Starting kmod-static-nodes.service... Jul 2 00:43:23.507087 systemd[1]: Starting modprobe@configfs.service... Jul 2 00:43:23.507099 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:43:23.507109 systemd[1]: Starting modprobe@drm.service... Jul 2 00:43:23.507119 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:43:23.507130 systemd[1]: Starting modprobe@fuse.service... Jul 2 00:43:23.507140 systemd[1]: Starting modprobe@loop.service... Jul 2 00:43:23.507151 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 00:43:23.507162 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 00:43:23.507173 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 00:43:23.507184 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 00:43:23.507194 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 00:43:23.507204 systemd[1]: Stopped systemd-journald.service. Jul 2 00:43:23.507214 kernel: fuse: init (API version 7.34) Jul 2 00:43:23.507225 systemd[1]: Starting systemd-journald.service... Jul 2 00:43:23.507237 kernel: loop: module loaded Jul 2 00:43:23.507247 systemd[1]: Starting systemd-modules-load.service... Jul 2 00:43:23.507258 systemd[1]: Starting systemd-network-generator.service... Jul 2 00:43:23.507268 systemd[1]: Starting systemd-remount-fs.service... Jul 2 00:43:23.507278 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 00:43:23.507289 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 00:43:23.507300 systemd[1]: Stopped verity-setup.service. Jul 2 00:43:23.507310 systemd[1]: Mounted dev-hugepages.mount. Jul 2 00:43:23.507321 systemd[1]: Mounted dev-mqueue.mount. Jul 2 00:43:23.507331 systemd[1]: Mounted media.mount. Jul 2 00:43:23.507342 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 00:43:23.507352 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 00:43:23.507363 systemd[1]: Mounted tmp.mount. Jul 2 00:43:23.507376 systemd-journald[997]: Journal started Jul 2 00:43:23.507415 systemd-journald[997]: Runtime Journal (/run/log/journal/fe3b90f2f0df44ae931aeb3bbfb340cc) is 6.0M, max 48.7M, 42.6M free. 
Jul 2 00:43:21.490000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 00:43:21.555000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 00:43:21.555000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 00:43:21.555000 audit: BPF prog-id=10 op=LOAD Jul 2 00:43:21.555000 audit: BPF prog-id=10 op=UNLOAD Jul 2 00:43:21.555000 audit: BPF prog-id=11 op=LOAD Jul 2 00:43:21.555000 audit: BPF prog-id=11 op=UNLOAD Jul 2 00:43:21.617000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 00:43:21.617000 audit[931]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001cd8c4 a1=4000150de0 a2=40001570c0 a3=32 items=0 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:43:21.617000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 00:43:21.618000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 00:43:21.618000 audit[931]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001cd9a9 a2=1ed a3=0 items=2 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:43:21.618000 audit: CWD cwd="/" Jul 2 00:43:21.618000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:43:21.618000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:43:21.618000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 00:43:23.384000 audit: BPF prog-id=12 op=LOAD Jul 2 00:43:23.384000 audit: BPF prog-id=3 op=UNLOAD Jul 2 00:43:23.384000 audit: BPF prog-id=13 op=LOAD Jul 2 00:43:23.384000 audit: BPF prog-id=14 op=LOAD Jul 2 00:43:23.384000 audit: BPF prog-id=4 op=UNLOAD Jul 2 00:43:23.384000 audit: BPF prog-id=5 op=UNLOAD Jul 2 00:43:23.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.509791 systemd[1]: Finished kmod-static-nodes.service. Jul 2 00:43:23.509826 systemd[1]: Started systemd-journald.service. Jul 2 00:43:23.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.406000 audit: BPF prog-id=12 op=UNLOAD Jul 2 00:43:23.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.483000 audit: BPF prog-id=15 op=LOAD Jul 2 00:43:23.484000 audit: BPF prog-id=16 op=LOAD Jul 2 00:43:23.484000 audit: BPF prog-id=17 op=LOAD Jul 2 00:43:23.484000 audit: BPF prog-id=13 op=UNLOAD Jul 2 00:43:23.484000 audit: BPF prog-id=14 op=UNLOAD Jul 2 00:43:23.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.504000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 00:43:23.504000 audit[997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff4374070 a2=4000 a3=1 items=0 ppid=1 pid=997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:43:23.504000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 00:43:23.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.383229 systemd[1]: Queued start job for default target multi-user.target. 
Jul 2 00:43:23.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.616673 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:43:23.383240 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 2 00:43:21.616952 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 00:43:23.386201 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 00:43:21.616973 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 00:43:23.510529 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 00:43:21.617004 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 00:43:21.617014 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 00:43:21.617043 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 00:43:23.510906 systemd[1]: Finished modprobe@configfs.service. Jul 2 00:43:21.617055 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 00:43:21.617245 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 00:43:21.617279 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 00:43:21.617290 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 00:43:21.617977 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 00:43:21.618009 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 00:43:23.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:23.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:21.618028 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 00:43:21.618042 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 00:43:21.618058 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 00:43:21.618072 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 00:43:23.062792 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:23Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 00:43:23.063060 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:23Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 00:43:23.063166 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:23Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 00:43:23.512008 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:43:23.063325 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:23Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 00:43:23.063373 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:23Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 00:43:23.063431 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2024-07-02T00:43:23Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 00:43:23.512123 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:43:23.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:23.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.513159 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:43:23.513312 systemd[1]: Finished modprobe@drm.service. Jul 2 00:43:23.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.514116 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:43:23.514270 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:43:23.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.515136 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 00:43:23.515286 systemd[1]: Finished modprobe@fuse.service. Jul 2 00:43:23.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.516177 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:43:23.518120 systemd[1]: Finished modprobe@loop.service. Jul 2 00:43:23.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.519001 systemd[1]: Finished systemd-modules-load.service. Jul 2 00:43:23.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.519900 systemd[1]: Finished systemd-network-generator.service. Jul 2 00:43:23.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.520922 systemd[1]: Finished flatcar-tmpfiles.service. 
Jul 2 00:43:23.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.521763 systemd[1]: Finished systemd-remount-fs.service. Jul 2 00:43:23.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.522785 systemd[1]: Reached target network-pre.target. Jul 2 00:43:23.524413 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 00:43:23.526155 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 00:43:23.526741 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 00:43:23.529910 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 00:43:23.531509 systemd[1]: Starting systemd-journal-flush.service... Jul 2 00:43:23.532465 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:43:23.533433 systemd[1]: Starting systemd-random-seed.service... Jul 2 00:43:23.534197 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:43:23.535252 systemd[1]: Starting systemd-sysctl.service... Jul 2 00:43:23.536947 systemd[1]: Starting systemd-sysusers.service... Jul 2 00:43:23.540076 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 00:43:23.541093 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 00:43:23.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.542569 systemd[1]: Finished systemd-random-seed.service. Jul 2 00:43:23.543379 systemd[1]: Reached target first-boot-complete.target. Jul 2 00:43:23.543617 systemd-journald[997]: Time spent on flushing to /var/log/journal/fe3b90f2f0df44ae931aeb3bbfb340cc is 12.832ms for 973 entries. Jul 2 00:43:23.543617 systemd-journald[997]: System Journal (/var/log/journal/fe3b90f2f0df44ae931aeb3bbfb340cc) is 8.0M, max 195.6M, 187.6M free. Jul 2 00:43:23.564142 systemd-journald[997]: Received client request to flush runtime journal. Jul 2 00:43:23.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.549947 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 00:43:23.565100 udevadm[1032]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 00:43:23.551687 systemd[1]: Starting systemd-udev-settle.service... Jul 2 00:43:23.561206 systemd[1]: Finished systemd-sysctl.service. Jul 2 00:43:23.564987 systemd[1]: Finished systemd-journal-flush.service. 
Jul 2 00:43:23.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.569165 systemd[1]: Finished systemd-sysusers.service. Jul 2 00:43:23.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.893586 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 00:43:23.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.893000 audit: BPF prog-id=18 op=LOAD Jul 2 00:43:23.894000 audit: BPF prog-id=19 op=LOAD Jul 2 00:43:23.894000 audit: BPF prog-id=7 op=UNLOAD Jul 2 00:43:23.894000 audit: BPF prog-id=8 op=UNLOAD Jul 2 00:43:23.895573 systemd[1]: Starting systemd-udevd.service... Jul 2 00:43:23.912431 systemd-udevd[1034]: Using default interface naming scheme 'v252'. Jul 2 00:43:23.923362 systemd[1]: Started systemd-udevd.service. Jul 2 00:43:23.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:23.925000 audit: BPF prog-id=20 op=LOAD Jul 2 00:43:23.925988 systemd[1]: Starting systemd-networkd.service... Jul 2 00:43:23.929000 audit: BPF prog-id=21 op=LOAD Jul 2 00:43:23.929000 audit: BPF prog-id=22 op=LOAD Jul 2 00:43:23.929000 audit: BPF prog-id=23 op=LOAD Jul 2 00:43:23.931118 systemd[1]: Starting systemd-userdbd.service... Jul 2 00:43:23.942193 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 2 00:43:23.964204 systemd[1]: Started systemd-userdbd.service. Jul 2 00:43:23.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.008077 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 00:43:24.021262 systemd[1]: Finished systemd-udev-settle.service. Jul 2 00:43:24.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.023107 systemd[1]: Starting lvm2-activation-early.service... Jul 2 00:43:24.030331 systemd-networkd[1037]: lo: Link UP Jul 2 00:43:24.030345 systemd-networkd[1037]: lo: Gained carrier Jul 2 00:43:24.030687 systemd-networkd[1037]: Enumeration completed Jul 2 00:43:24.030776 systemd[1]: Started systemd-networkd.service. Jul 2 00:43:24.030805 systemd-networkd[1037]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:43:24.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:24.035588 systemd-networkd[1037]: eth0: Link UP Jul 2 00:43:24.035596 systemd-networkd[1037]: eth0: Gained carrier Jul 2 00:43:24.038826 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:43:24.059118 systemd-networkd[1037]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 00:43:24.063733 systemd[1]: Finished lvm2-activation-early.service. Jul 2 00:43:24.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.064519 systemd[1]: Reached target cryptsetup.target. Jul 2 00:43:24.066218 systemd[1]: Starting lvm2-activation.service... Jul 2 00:43:24.069647 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:43:24.096711 systemd[1]: Finished lvm2-activation.service. Jul 2 00:43:24.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.097464 systemd[1]: Reached target local-fs-pre.target. Jul 2 00:43:24.098087 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 00:43:24.098118 systemd[1]: Reached target local-fs.target. Jul 2 00:43:24.098654 systemd[1]: Reached target machines.target. Jul 2 00:43:24.100323 systemd[1]: Starting ldconfig.service... Jul 2 00:43:24.101188 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:43:24.101237 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:24.102293 systemd[1]: Starting systemd-boot-update.service... Jul 2 00:43:24.103873 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 00:43:24.105706 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 00:43:24.107471 systemd[1]: Starting systemd-sysext.service... Jul 2 00:43:24.108738 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1070 (bootctl) Jul 2 00:43:24.109746 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 00:43:24.119506 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 00:43:24.123659 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 00:43:24.123860 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 00:43:24.125266 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 00:43:24.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.135956 kernel: loop0: detected capacity change from 0 to 194512 Jul 2 00:43:24.170604 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 00:43:24.171230 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 00:43:24.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:24.179509 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 00:43:24.184959 systemd-fsck[1079]: fsck.fat 4.2 (2021-01-31) Jul 2 00:43:24.184959 systemd-fsck[1079]: /dev/vda1: 236 files, 117047/258078 clusters Jul 2 00:43:24.188620 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 00:43:24.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.191944 systemd[1]: Mounting boot.mount... Jul 2 00:43:24.194973 kernel: loop1: detected capacity change from 0 to 194512 Jul 2 00:43:24.198327 systemd[1]: Mounted boot.mount. Jul 2 00:43:24.207681 (sd-sysext)[1085]: Using extensions 'kubernetes'. Jul 2 00:43:24.208561 (sd-sysext)[1085]: Merged extensions into '/usr'. Jul 2 00:43:24.210867 systemd[1]: Finished systemd-boot-update.service. Jul 2 00:43:24.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.224901 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:43:24.226123 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:43:24.227902 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:43:24.229740 systemd[1]: Starting modprobe@loop.service... Jul 2 00:43:24.230377 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:43:24.230500 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:24.231233 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:43:24.231349 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:43:24.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.232548 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:43:24.232650 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:43:24.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.233737 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:43:24.233844 systemd[1]: Finished modprobe@loop.service. 
Jul 2 00:43:24.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.234982 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:43:24.235076 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:43:24.274050 ldconfig[1069]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 00:43:24.278452 systemd[1]: Finished ldconfig.service. Jul 2 00:43:24.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.502131 systemd[1]: Mounting usr-share-oem.mount... Jul 2 00:43:24.507015 systemd[1]: Mounted usr-share-oem.mount. Jul 2 00:43:24.508599 systemd[1]: Finished systemd-sysext.service. Jul 2 00:43:24.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.510342 systemd[1]: Starting ensure-sysext.service... Jul 2 00:43:24.511824 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 00:43:24.515899 systemd[1]: Reloading. Jul 2 00:43:24.526286 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 00:43:24.528152 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 00:43:24.531056 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 00:43:24.549052 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2024-07-02T00:43:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:43:24.549080 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2024-07-02T00:43:24Z" level=info msg="torcx already run" Jul 2 00:43:24.604637 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:43:24.604659 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:43:24.619636 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 00:43:24.659000 audit: BPF prog-id=24 op=LOAD Jul 2 00:43:24.660000 audit: BPF prog-id=21 op=UNLOAD Jul 2 00:43:24.660000 audit: BPF prog-id=25 op=LOAD Jul 2 00:43:24.660000 audit: BPF prog-id=26 op=LOAD Jul 2 00:43:24.660000 audit: BPF prog-id=22 op=UNLOAD Jul 2 00:43:24.660000 audit: BPF prog-id=23 op=UNLOAD Jul 2 00:43:24.661000 audit: BPF prog-id=27 op=LOAD Jul 2 00:43:24.661000 audit: BPF prog-id=15 op=UNLOAD Jul 2 00:43:24.661000 audit: BPF prog-id=28 op=LOAD Jul 2 00:43:24.661000 audit: BPF prog-id=29 op=LOAD Jul 2 00:43:24.661000 audit: BPF prog-id=16 op=UNLOAD Jul 2 00:43:24.661000 audit: BPF prog-id=17 op=UNLOAD Jul 2 00:43:24.662000 audit: BPF prog-id=30 op=LOAD Jul 2 00:43:24.662000 audit: BPF prog-id=31 op=LOAD Jul 2 00:43:24.662000 audit: BPF prog-id=18 op=UNLOAD Jul 2 00:43:24.662000 audit: BPF prog-id=19 op=UNLOAD Jul 2 00:43:24.663000 audit: BPF prog-id=32 op=LOAD Jul 2 00:43:24.663000 audit: BPF prog-id=20 op=UNLOAD Jul 2 00:43:24.666399 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 00:43:24.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.670372 systemd[1]: Starting audit-rules.service... Jul 2 00:43:24.672207 systemd[1]: Starting clean-ca-certificates.service... Jul 2 00:43:24.673917 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 00:43:24.674000 audit: BPF prog-id=33 op=LOAD Jul 2 00:43:24.676000 audit: BPF prog-id=34 op=LOAD Jul 2 00:43:24.676226 systemd[1]: Starting systemd-resolved.service... Jul 2 00:43:24.678307 systemd[1]: Starting systemd-timesyncd.service... Jul 2 00:43:24.680006 systemd[1]: Starting systemd-update-utmp.service... Jul 2 00:43:24.684229 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:43:24.683000 audit[1156]: SYSTEM_BOOT pid=1156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.685334 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:43:24.687886 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:43:24.689742 systemd[1]: Starting modprobe@loop.service... Jul 2 00:43:24.690516 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:43:24.690649 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:24.691542 systemd[1]: Finished clean-ca-certificates.service. Jul 2 00:43:24.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.692663 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:43:24.692792 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:43:24.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:24.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.693845 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:43:24.693965 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:43:24.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.694977 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:43:24.695085 systemd[1]: Finished modprobe@loop.service. Jul 2 00:43:24.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.697884 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 00:43:24.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.699086 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:43:24.699229 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:43:24.700481 systemd[1]: Starting systemd-update-done.service... Jul 2 00:43:24.701151 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:43:24.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.703128 systemd[1]: Finished systemd-update-utmp.service. Jul 2 00:43:24.704799 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:43:24.706114 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:43:24.707704 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:43:24.709367 systemd[1]: Starting modprobe@loop.service... Jul 2 00:43:24.709944 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:43:24.710063 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:24.710154 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jul 2 00:43:24.710921 systemd[1]: Finished systemd-update-done.service. Jul 2 00:43:24.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.711891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:43:24.712119 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:43:24.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.713106 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:43:24.713216 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:43:24.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.714221 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:43:24.714327 systemd[1]: Finished modprobe@loop.service. Jul 2 00:43:24.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.717127 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:43:24.718562 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:43:24.720411 systemd[1]: Starting modprobe@drm.service... Jul 2 00:43:24.723138 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:43:24.724844 systemd[1]: Starting modprobe@loop.service... Jul 2 00:43:24.725602 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:43:24.725778 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:24.727087 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 00:43:24.727802 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:43:24.728807 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:43:24.728946 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 2 00:43:24.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.730054 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:43:24.730161 systemd[1]: Finished modprobe@drm.service. Jul 2 00:43:24.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.731261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:43:24.731366 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:43:24.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.732359 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:43:24.732474 systemd[1]: Finished modprobe@loop.service. Jul 2 00:43:24.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:24.733566 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:43:24.733657 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:43:24.738554 systemd[1]: Finished ensure-sysext.service. Jul 2 00:43:24.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:24.740000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 00:43:24.740000 audit[1182]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc77db4a0 a2=420 a3=0 items=0 ppid=1150 pid=1182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:43:24.741588 augenrules[1182]: No rules Jul 2 00:43:24.740000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 00:43:24.742263 systemd[1]: Started systemd-timesyncd.service. Jul 2 00:43:25.240115 systemd-timesyncd[1155]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 00:43:25.240186 systemd-timesyncd[1155]: Initial clock synchronization to Tue 2024-07-02 00:43:25.240035 UTC. Jul 2 00:43:25.240528 systemd[1]: Finished audit-rules.service. Jul 2 00:43:25.241252 systemd[1]: Reached target time-set.target. Jul 2 00:43:25.242673 systemd-resolved[1154]: Positive Trust Anchors: Jul 2 00:43:25.242683 systemd-resolved[1154]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:43:25.242711 systemd-resolved[1154]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 00:43:25.250247 systemd-resolved[1154]: Defaulting to hostname 'linux'. Jul 2 00:43:25.251553 systemd[1]: Started systemd-resolved.service. Jul 2 00:43:25.252207 systemd[1]: Reached target network.target. Jul 2 00:43:25.252787 systemd[1]: Reached target nss-lookup.target. Jul 2 00:43:25.253372 systemd[1]: Reached target sysinit.target. Jul 2 00:43:25.253965 systemd[1]: Started motdgen.path. Jul 2 00:43:25.254488 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 00:43:25.255425 systemd[1]: Started logrotate.timer. Jul 2 00:43:25.256033 systemd[1]: Started mdadm.timer. Jul 2 00:43:25.256529 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 00:43:25.257099 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:43:25.257125 systemd[1]: Reached target paths.target. Jul 2 00:43:25.257645 systemd[1]: Reached target timers.target. Jul 2 00:43:25.258458 systemd[1]: Listening on dbus.socket. Jul 2 00:43:25.260010 systemd[1]: Starting docker.socket... Jul 2 00:43:25.263022 systemd[1]: Listening on sshd.socket. Jul 2 00:43:25.263716 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:25.264185 systemd[1]: Listening on docker.socket. Jul 2 00:43:25.264811 systemd[1]: Reached target sockets.target. Jul 2 00:43:25.265389 systemd[1]: Reached target basic.target. Jul 2 00:43:25.265954 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Jul 2 00:43:25.265987 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 00:43:25.266978 systemd[1]: Starting containerd.service... Jul 2 00:43:25.268587 systemd[1]: Starting dbus.service... Jul 2 00:43:25.270020 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 00:43:25.271802 systemd[1]: Starting extend-filesystems.service... Jul 2 00:43:25.272549 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 00:43:25.273897 systemd[1]: Starting motdgen.service... Jul 2 00:43:25.277536 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 00:43:25.279434 systemd[1]: Starting sshd-keygen.service... Jul 2 00:43:25.282070 systemd[1]: Starting systemd-logind.service... Jul 2 00:43:25.282721 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:25.282821 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:43:25.283353 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:43:25.284258 systemd[1]: Starting update-engine.service... Jul 2 00:43:25.286236 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 00:43:25.290672 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:43:25.290841 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 00:43:25.291057 jq[1206]: true Jul 2 00:43:25.292370 jq[1192]: false Jul 2 00:43:25.293835 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:43:25.294014 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 00:43:25.299590 jq[1210]: true Jul 2 00:43:25.300936 extend-filesystems[1193]: Found loop1 Jul 2 00:43:25.301710 extend-filesystems[1193]: Found vda Jul 2 00:43:25.302484 extend-filesystems[1193]: Found vda1 Jul 2 00:43:25.303484 extend-filesystems[1193]: Found vda2 Jul 2 00:43:25.304361 extend-filesystems[1193]: Found vda3 Jul 2 00:43:25.304361 extend-filesystems[1193]: Found usr Jul 2 00:43:25.305703 extend-filesystems[1193]: Found vda4 Jul 2 00:43:25.306458 extend-filesystems[1193]: Found vda6 Jul 2 00:43:25.306458 extend-filesystems[1193]: Found vda7 Jul 2 00:43:25.306458 extend-filesystems[1193]: Found vda9 Jul 2 00:43:25.306458 extend-filesystems[1193]: Checking size of /dev/vda9 Jul 2 00:43:25.308166 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:43:25.308349 systemd[1]: Finished motdgen.service. Jul 2 00:43:25.337979 extend-filesystems[1193]: Resized partition /dev/vda9 Jul 2 00:43:25.337953 dbus-daemon[1191]: [system] SELinux support is enabled Jul 2 00:43:25.338120 systemd[1]: Started dbus.service. Jul 2 00:43:25.341371 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:43:25.341393 systemd[1]: Reached target system-config.target. Jul 2 00:43:25.342206 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:43:25.342229 systemd[1]: Reached target user-config.target. 
Jul 2 00:43:25.349982 extend-filesystems[1235]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 00:43:25.366365 update_engine[1205]: I0702 00:43:25.366139 1205 main.cc:92] Flatcar Update Engine starting Jul 2 00:43:25.367346 bash[1238]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:43:25.368446 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 00:43:25.368932 update_engine[1205]: I0702 00:43:25.368902 1205 update_check_scheduler.cc:74] Next update check in 4m9s Jul 2 00:43:25.369263 systemd[1]: Started update-engine.service. Jul 2 00:43:25.371321 systemd[1]: Started locksmithd.service. Jul 2 00:43:25.380288 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 00:43:25.380219 systemd-logind[1202]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 00:43:25.383572 systemd-logind[1202]: New seat seat0. Jul 2 00:43:25.383679 env[1211]: time="2024-07-02T00:43:25.383611351Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 00:43:25.388105 systemd[1]: Started systemd-logind.service. Jul 2 00:43:25.401176 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 00:43:25.410955 env[1211]: time="2024-07-02T00:43:25.410850471Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:43:25.411566 extend-filesystems[1235]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 00:43:25.411566 extend-filesystems[1235]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 00:43:25.411566 extend-filesystems[1235]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 00:43:25.415674 extend-filesystems[1193]: Resized filesystem in /dev/vda9 Jul 2 00:43:25.416476 env[1211]: time="2024-07-02T00:43:25.415279391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:43:25.413450 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:43:25.413619 systemd[1]: Finished extend-filesystems.service. Jul 2 00:43:25.416961 env[1211]: time="2024-07-02T00:43:25.416666951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:43:25.416961 env[1211]: time="2024-07-02T00:43:25.416706151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:43:25.416961 env[1211]: time="2024-07-02T00:43:25.416934151Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:43:25.417030 env[1211]: time="2024-07-02T00:43:25.416963471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:43:25.417030 env[1211]: time="2024-07-02T00:43:25.416977671Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:43:25.417030 env[1211]: time="2024-07-02T00:43:25.416987071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:43:25.417088 env[1211]: time="2024-07-02T00:43:25.417063111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:43:25.417430 env[1211]: time="2024-07-02T00:43:25.417400231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:43:25.417543 env[1211]: time="2024-07-02T00:43:25.417521631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:43:25.417543 env[1211]: time="2024-07-02T00:43:25.417542431Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:43:25.417658 env[1211]: time="2024-07-02T00:43:25.417595591Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:43:25.417658 env[1211]: time="2024-07-02T00:43:25.417610791Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:43:25.420106 env[1211]: time="2024-07-02T00:43:25.420070151Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:43:25.420106 env[1211]: time="2024-07-02T00:43:25.420104551Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:43:25.420233 env[1211]: time="2024-07-02T00:43:25.420117791Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:43:25.420233 env[1211]: time="2024-07-02T00:43:25.420147671Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:43:25.420233 env[1211]: time="2024-07-02T00:43:25.420172751Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:43:25.420233 env[1211]: time="2024-07-02T00:43:25.420186391Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:43:25.420233 env[1211]: time="2024-07-02T00:43:25.420198471Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:43:25.420540 env[1211]: time="2024-07-02T00:43:25.420521231Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:43:25.420698 env[1211]: time="2024-07-02T00:43:25.420543471Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 00:43:25.420698 env[1211]: time="2024-07-02T00:43:25.420557191Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:43:25.420698 env[1211]: time="2024-07-02T00:43:25.420569871Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:43:25.420698 env[1211]: time="2024-07-02T00:43:25.420582991Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:43:25.420775 env[1211]: time="2024-07-02T00:43:25.420691511Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 2 00:43:25.420796 env[1211]: time="2024-07-02T00:43:25.420781351Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:43:25.421017 env[1211]: time="2024-07-02T00:43:25.421000111Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:43:25.421115 env[1211]: time="2024-07-02T00:43:25.421028911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:43:25.421115 env[1211]: time="2024-07-02T00:43:25.421041751Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:43:25.421176 env[1211]: time="2024-07-02T00:43:25.421149151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:43:25.421176 env[1211]: time="2024-07-02T00:43:25.421173631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:43:25.421230 env[1211]: time="2024-07-02T00:43:25.421186151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:43:25.421230 env[1211]: time="2024-07-02T00:43:25.421197111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:43:25.421230 env[1211]: time="2024-07-02T00:43:25.421208431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:43:25.421230 env[1211]: time="2024-07-02T00:43:25.421221111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:43:25.421302 env[1211]: time="2024-07-02T00:43:25.421232151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:43:25.421302 env[1211]: time="2024-07-02T00:43:25.421243631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:43:25.421302 env[1211]: time="2024-07-02T00:43:25.421257071Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:43:25.421394 env[1211]: time="2024-07-02T00:43:25.421367751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:43:25.421394 env[1211]: time="2024-07-02T00:43:25.421390631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:43:25.421449 env[1211]: time="2024-07-02T00:43:25.421402511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:43:25.421449 env[1211]: time="2024-07-02T00:43:25.421413271Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:43:25.421449 env[1211]: time="2024-07-02T00:43:25.421425271Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 00:43:25.421449 env[1211]: time="2024-07-02T00:43:25.421435351Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jul 2 00:43:25.421527 env[1211]: time="2024-07-02T00:43:25.421452391Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 00:43:25.421527 env[1211]: time="2024-07-02T00:43:25.421484591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 00:43:25.421716 env[1211]: time="2024-07-02T00:43:25.421664391Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:43:25.424189 env[1211]: time="2024-07-02T00:43:25.421719831Z" level=info msg="Connect containerd service" Jul 2 00:43:25.424189 env[1211]: time="2024-07-02T00:43:25.421748591Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:43:25.424189 env[1211]: time="2024-07-02T00:43:25.422387671Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:43:25.424189 env[1211]: time="2024-07-02T00:43:25.422509671Z" level=info msg="Start subscribing containerd event" Jul 2 00:43:25.424189 env[1211]: time="2024-07-02T00:43:25.422558871Z" level=info msg="Start recovering state" Jul 2 00:43:25.424189 env[1211]: time="2024-07-02T00:43:25.422618191Z" level=info msg="Start event monitor" Jul 2 00:43:25.424189 env[1211]: time="2024-07-02T00:43:25.422635591Z" level=info msg="Start snapshots syncer" Jul 2 
00:43:25.424189 env[1211]: time="2024-07-02T00:43:25.422646191Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:43:25.424189 env[1211]: time="2024-07-02T00:43:25.422653471Z" level=info msg="Start streaming server" Jul 2 00:43:25.424189 env[1211]: time="2024-07-02T00:43:25.422793111Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:43:25.424189 env[1211]: time="2024-07-02T00:43:25.422888111Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:43:25.424189 env[1211]: time="2024-07-02T00:43:25.423832991Z" level=info msg="containerd successfully booted in 0.041257s" Jul 2 00:43:25.423023 systemd[1]: Started containerd.service. Jul 2 00:43:25.432251 locksmithd[1240]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:43:26.201364 systemd-networkd[1037]: eth0: Gained IPv6LL Jul 2 00:43:26.203116 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 00:43:26.204098 systemd[1]: Reached target network-online.target. Jul 2 00:43:26.206350 systemd[1]: Starting kubelet.service... Jul 2 00:43:26.753035 systemd[1]: Started kubelet.service. Jul 2 00:43:26.824599 sshd_keygen[1208]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:43:26.842085 systemd[1]: Finished sshd-keygen.service. Jul 2 00:43:26.844211 systemd[1]: Starting issuegen.service... Jul 2 00:43:26.848513 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:43:26.848659 systemd[1]: Finished issuegen.service. Jul 2 00:43:26.850585 systemd[1]: Starting systemd-user-sessions.service... Jul 2 00:43:26.856012 systemd[1]: Finished systemd-user-sessions.service. Jul 2 00:43:26.857937 systemd[1]: Started getty@tty1.service. Jul 2 00:43:26.859698 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 2 00:43:26.860558 systemd[1]: Reached target getty.target. Jul 2 00:43:26.861225 systemd[1]: Reached target multi-user.target. Jul 2 00:43:26.863197 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 00:43:26.869488 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 00:43:26.869634 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 00:43:26.870518 systemd[1]: Startup finished in 636ms (kernel) + 3.848s (initrd) + 4.919s (userspace) = 9.404s. Jul 2 00:43:27.304358 kubelet[1254]: E0702 00:43:27.304274 1254 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:43:27.306269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:43:27.306399 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:43:31.275532 systemd[1]: Created slice system-sshd.slice. Jul 2 00:43:31.276623 systemd[1]: Started sshd@0-10.0.0.42:22-10.0.0.1:40378.service. Jul 2 00:43:31.320364 sshd[1278]: Accepted publickey for core from 10.0.0.1 port 40378 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:43:31.322302 sshd[1278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:43:31.330998 systemd-logind[1202]: New session 1 of user core. Jul 2 00:43:31.331897 systemd[1]: Created slice user-500.slice. Jul 2 00:43:31.333004 systemd[1]: Starting user-runtime-dir@500.service... 
Jul 2 00:43:31.341043 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 00:43:31.342350 systemd[1]: Starting user@500.service... Jul 2 00:43:31.344889 (systemd)[1281]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:43:31.414507 systemd[1281]: Queued start job for default target default.target. Jul 2 00:43:31.414997 systemd[1281]: Reached target paths.target. Jul 2 00:43:31.415017 systemd[1281]: Reached target sockets.target. Jul 2 00:43:31.415028 systemd[1281]: Reached target timers.target. Jul 2 00:43:31.415039 systemd[1281]: Reached target basic.target. Jul 2 00:43:31.415090 systemd[1281]: Reached target default.target. Jul 2 00:43:31.415112 systemd[1281]: Startup finished in 64ms. Jul 2 00:43:31.415285 systemd[1]: Started user@500.service. Jul 2 00:43:31.416222 systemd[1]: Started session-1.scope. Jul 2 00:43:31.466714 systemd[1]: Started sshd@1-10.0.0.42:22-10.0.0.1:40380.service. Jul 2 00:43:31.502611 sshd[1290]: Accepted publickey for core from 10.0.0.1 port 40380 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:43:31.504145 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:43:31.508223 systemd-logind[1202]: New session 2 of user core. Jul 2 00:43:31.509163 systemd[1]: Started session-2.scope. Jul 2 00:43:31.562901 sshd[1290]: pam_unix(sshd:session): session closed for user core Jul 2 00:43:31.566445 systemd[1]: Started sshd@2-10.0.0.42:22-10.0.0.1:40390.service. Jul 2 00:43:31.566868 systemd[1]: sshd@1-10.0.0.42:22-10.0.0.1:40380.service: Deactivated successfully. Jul 2 00:43:31.567606 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:43:31.568082 systemd-logind[1202]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:43:31.568768 systemd-logind[1202]: Removed session 2. Jul 2 00:43:31.600343 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 40390 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:43:31.601605 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:43:31.604928 systemd-logind[1202]: New session 3 of user core. Jul 2 00:43:31.605852 systemd[1]: Started session-3.scope. Jul 2 00:43:31.654599 sshd[1295]: pam_unix(sshd:session): session closed for user core Jul 2 00:43:31.657347 systemd[1]: sshd@2-10.0.0.42:22-10.0.0.1:40390.service: Deactivated successfully. Jul 2 00:43:31.657998 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:43:31.658620 systemd-logind[1202]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:43:31.659889 systemd[1]: Started sshd@3-10.0.0.42:22-10.0.0.1:40392.service. Jul 2 00:43:31.660656 systemd-logind[1202]: Removed session 3. Jul 2 00:43:31.690234 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 40392 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:43:31.691431 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:43:31.694539 systemd-logind[1202]: New session 4 of user core. Jul 2 00:43:31.695471 systemd[1]: Started session-4.scope. Jul 2 00:43:31.748497 sshd[1302]: pam_unix(sshd:session): session closed for user core Jul 2 00:43:31.752377 systemd[1]: sshd@3-10.0.0.42:22-10.0.0.1:40392.service: Deactivated successfully. Jul 2 00:43:31.753025 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:43:31.753654 systemd-logind[1202]: Session 4 logged out. Waiting for processes to exit. 
Jul 2 00:43:31.754872 systemd[1]: Started sshd@4-10.0.0.42:22-10.0.0.1:40398.service. Jul 2 00:43:31.755599 systemd-logind[1202]: Removed session 4. Jul 2 00:43:31.785812 sshd[1308]: Accepted publickey for core from 10.0.0.1 port 40398 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:43:31.786961 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:43:31.790053 systemd-logind[1202]: New session 5 of user core. Jul 2 00:43:31.790959 systemd[1]: Started session-5.scope. Jul 2 00:43:31.849129 sudo[1311]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:43:31.849711 sudo[1311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:43:31.861401 systemd[1]: Starting coreos-metadata.service... Jul 2 00:43:31.867108 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 00:43:31.867274 systemd[1]: Finished coreos-metadata.service. Jul 2 00:43:32.522874 systemd[1]: Stopped kubelet.service. Jul 2 00:43:32.525206 systemd[1]: Starting kubelet.service... Jul 2 00:43:32.543977 systemd[1]: Reloading. Jul 2 00:43:32.606103 /usr/lib/systemd/system-generators/torcx-generator[1381]: time="2024-07-02T00:43:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:43:32.606133 /usr/lib/systemd/system-generators/torcx-generator[1381]: time="2024-07-02T00:43:32Z" level=info msg="torcx already run" Jul 2 00:43:32.734768 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:43:32.734789 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:43:32.750262 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:43:32.815935 systemd[1]: Started kubelet.service. Jul 2 00:43:32.819771 systemd[1]: Stopping kubelet.service... Jul 2 00:43:32.820331 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:43:32.820532 systemd[1]: Stopped kubelet.service. Jul 2 00:43:32.822370 systemd[1]: Starting kubelet.service... Jul 2 00:43:32.904281 systemd[1]: Started kubelet.service. Jul 2 00:43:32.948759 kubelet[1426]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:43:32.948759 kubelet[1426]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:43:32.948759 kubelet[1426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:43:32.949226 kubelet[1426]: I0702 00:43:32.948824 1426 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:43:33.715935 kubelet[1426]: I0702 00:43:33.715898 1426 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 00:43:33.715935 kubelet[1426]: I0702 00:43:33.715928 1426 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:43:33.716129 kubelet[1426]: I0702 00:43:33.716115 1426 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 00:43:33.739192 kubelet[1426]: I0702 00:43:33.737794 1426 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:43:33.750225 kubelet[1426]: I0702 00:43:33.750197 1426 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 00:43:33.751415 kubelet[1426]: I0702 00:43:33.751389 1426 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:43:33.751703 kubelet[1426]: I0702 00:43:33.751680 1426 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:43:33.751822 kubelet[1426]: I0702 00:43:33.751810 1426 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:43:33.751874 kubelet[1426]: I0702 00:43:33.751866 1426 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:43:33.753097 kubelet[1426]: I0702 00:43:33.753075 1426 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:43:33.755525 kubelet[1426]: I0702 00:43:33.755501 1426 kubelet.go:396] "Attempting to sync node with API server" Jul 2 00:43:33.755620 kubelet[1426]: I0702 00:43:33.755608 1426 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:43:33.755687 kubelet[1426]: I0702 00:43:33.755677 1426 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:43:33.755770 kubelet[1426]: E0702 00:43:33.755729 1426 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 
00:43:33.755804 kubelet[1426]: E0702 00:43:33.755767 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:33.755840 kubelet[1426]: I0702 00:43:33.755754 1426 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:43:33.756658 kubelet[1426]: I0702 00:43:33.756638 1426 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 00:43:33.757198 kubelet[1426]: I0702 00:43:33.757179 1426 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:43:33.757830 kubelet[1426]: W0702 00:43:33.757811 1426 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:43:33.758698 kubelet[1426]: I0702 00:43:33.758678 1426 server.go:1256] "Started kubelet" Jul 2 00:43:33.760729 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 00:43:33.766844 kubelet[1426]: I0702 00:43:33.766817 1426 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:43:33.767922 kubelet[1426]: I0702 00:43:33.767902 1426 server.go:461] "Adding debug handlers to kubelet server" Jul 2 00:43:33.769254 kubelet[1426]: I0702 00:43:33.769227 1426 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:43:33.769506 kubelet[1426]: I0702 00:43:33.769491 1426 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:43:33.770819 kubelet[1426]: I0702 00:43:33.770780 1426 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:43:33.771920 kubelet[1426]: I0702 00:43:33.771871 1426 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:43:33.773099 kubelet[1426]: I0702 00:43:33.773070 1426 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:43:33.773199 kubelet[1426]: I0702 00:43:33.773179 1426 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:43:33.774506 kubelet[1426]: I0702 00:43:33.774479 1426 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:43:33.774692 kubelet[1426]: I0702 00:43:33.774664 1426 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:43:33.775300 kubelet[1426]: E0702 00:43:33.775278 1426 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:43:33.781020 kubelet[1426]: I0702 00:43:33.780992 1426 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:43:33.788220 kubelet[1426]: E0702 00:43:33.788145 1426 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.42\" not found" node="10.0.0.42" Jul 2 00:43:33.792655 kubelet[1426]: I0702 00:43:33.792633 1426 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:43:33.792747 kubelet[1426]: I0702 00:43:33.792737 1426 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:43:33.792803 kubelet[1426]: I0702 00:43:33.792794 1426 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:43:33.858729 kubelet[1426]: I0702 00:43:33.858699 1426 policy_none.go:49] "None policy: Start" Jul 2 00:43:33.859576 kubelet[1426]: I0702 00:43:33.859558 1426 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:43:33.859685 kubelet[1426]: I0702 00:43:33.859673 1426 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:43:33.864712 systemd[1]: Created slice kubepods.slice. Jul 2 00:43:33.868964 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 00:43:33.871556 systemd[1]: Created slice kubepods-besteffort.slice. Jul 2 00:43:33.873148 kubelet[1426]: I0702 00:43:33.873119 1426 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.42" Jul 2 00:43:33.876790 kubelet[1426]: I0702 00:43:33.876763 1426 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.42" Jul 2 00:43:33.878957 kubelet[1426]: I0702 00:43:33.878927 1426 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:43:33.879205 kubelet[1426]: I0702 00:43:33.879181 1426 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:43:33.889272 kubelet[1426]: I0702 00:43:33.889251 1426 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 2 00:43:33.889781 env[1211]: time="2024-07-02T00:43:33.889670071Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 00:43:33.890056 kubelet[1426]: I0702 00:43:33.889847 1426 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 2 00:43:33.935565 kubelet[1426]: I0702 00:43:33.935528 1426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:43:33.937195 kubelet[1426]: I0702 00:43:33.937171 1426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:43:33.937195 kubelet[1426]: I0702 00:43:33.937197 1426 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:43:33.937270 kubelet[1426]: I0702 00:43:33.937213 1426 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 00:43:33.937270 kubelet[1426]: E0702 00:43:33.937258 1426 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 2 00:43:34.426616 sudo[1311]: pam_unix(sudo:session): session closed for user root Jul 2 00:43:34.428289 sshd[1308]: pam_unix(sshd:session): session closed for user core Jul 2 00:43:34.430537 systemd[1]: sshd@4-10.0.0.42:22-10.0.0.1:40398.service: Deactivated successfully. Jul 2 00:43:34.431219 systemd[1]: session-5.scope: Deactivated successfully. 
Jul 2 00:43:34.431897 systemd-logind[1202]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:43:34.432732 systemd-logind[1202]: Removed session 5. Jul 2 00:43:34.718075 kubelet[1426]: I0702 00:43:34.717973 1426 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 2 00:43:34.718507 kubelet[1426]: W0702 00:43:34.718123 1426 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jul 2 00:43:34.718507 kubelet[1426]: W0702 00:43:34.718220 1426 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jul 2 00:43:34.718507 kubelet[1426]: W0702 00:43:34.718380 1426 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jul 2 00:43:34.756734 kubelet[1426]: E0702 00:43:34.756706 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:34.756734 kubelet[1426]: I0702 00:43:34.756716 1426 apiserver.go:52] "Watching apiserver" Jul 2 00:43:34.759998 kubelet[1426]: I0702 00:43:34.759970 1426 topology_manager.go:215] "Topology Admit Handler" podUID="b707a177-ac43-427b-8cdd-752dddf1134f" podNamespace="kube-system" podName="cilium-jfcrt" Jul 2 00:43:34.760236 kubelet[1426]: I0702 00:43:34.760218 1426 topology_manager.go:215] "Topology Admit Handler" podUID="b5790ad4-8c59-4f9d-b395-4bec2b09c7dd" podNamespace="kube-system" podName="kube-proxy-xdcmp" Jul 2 00:43:34.764669 systemd[1]: Created slice kubepods-burstable-podb707a177_ac43_427b_8cdd_752dddf1134f.slice. Jul 2 00:43:34.773834 kubelet[1426]: I0702 00:43:34.773805 1426 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:43:34.777805 systemd[1]: Created slice kubepods-besteffort-podb5790ad4_8c59_4f9d_b395_4bec2b09c7dd.slice. 
Jul 2 00:43:34.780754 kubelet[1426]: I0702 00:43:34.780729 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-etc-cni-netd\") pod \"cilium-jfcrt\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " pod="kube-system/cilium-jfcrt" Jul 2 00:43:34.780831 kubelet[1426]: I0702 00:43:34.780769 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b707a177-ac43-427b-8cdd-752dddf1134f-hubble-tls\") pod \"cilium-jfcrt\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " pod="kube-system/cilium-jfcrt" Jul 2 00:43:34.780831 kubelet[1426]: I0702 00:43:34.780793 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b5790ad4-8c59-4f9d-b395-4bec2b09c7dd-kube-proxy\") pod \"kube-proxy-xdcmp\" (UID: \"b5790ad4-8c59-4f9d-b395-4bec2b09c7dd\") " pod="kube-system/kube-proxy-xdcmp" Jul 2 00:43:34.780831 kubelet[1426]: I0702 00:43:34.780813 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-cilium-cgroup\") pod \"cilium-jfcrt\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " pod="kube-system/cilium-jfcrt" Jul 2 00:43:34.780897 kubelet[1426]: I0702 00:43:34.780836 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-cni-path\") pod \"cilium-jfcrt\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " pod="kube-system/cilium-jfcrt" Jul 2 00:43:34.780897 kubelet[1426]: I0702 00:43:34.780854 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-lib-modules\") pod \"cilium-jfcrt\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " pod="kube-system/cilium-jfcrt" Jul 2 00:43:34.780897 kubelet[1426]: I0702 00:43:34.780872 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-xtables-lock\") pod \"cilium-jfcrt\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " pod="kube-system/cilium-jfcrt" Jul 2 00:43:34.780897 kubelet[1426]: I0702 00:43:34.780892 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjbsz\" (UniqueName: \"kubernetes.io/projected/b707a177-ac43-427b-8cdd-752dddf1134f-kube-api-access-tjbsz\") pod \"cilium-jfcrt\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " pod="kube-system/cilium-jfcrt" Jul 2 00:43:34.781001 kubelet[1426]: I0702 00:43:34.780912 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b707a177-ac43-427b-8cdd-752dddf1134f-clustermesh-secrets\") pod \"cilium-jfcrt\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " pod="kube-system/cilium-jfcrt" Jul 2 00:43:34.781001 kubelet[1426]: I0702 00:43:34.780932 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-host-proc-sys-net\") pod \"cilium-jfcrt\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " pod="kube-system/cilium-jfcrt" Jul 2 00:43:34.781001 kubelet[1426]: I0702 00:43:34.780961 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5790ad4-8c59-4f9d-b395-4bec2b09c7dd-xtables-lock\") pod \"kube-proxy-xdcmp\" (UID: \"b5790ad4-8c59-4f9d-b395-4bec2b09c7dd\") " pod="kube-system/kube-proxy-xdcmp" Jul 2 00:43:34.781001 kubelet[1426]: I0702 00:43:34.780982 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5790ad4-8c59-4f9d-b395-4bec2b09c7dd-lib-modules\") pod \"kube-proxy-xdcmp\" (UID: \"b5790ad4-8c59-4f9d-b395-4bec2b09c7dd\") " pod="kube-system/kube-proxy-xdcmp" Jul 2 00:43:34.781001 kubelet[1426]: I0702 00:43:34.780999 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-cilium-run\") pod \"cilium-jfcrt\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " pod="kube-system/cilium-jfcrt" Jul 2 00:43:34.781103 kubelet[1426]: I0702 00:43:34.781018 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-bpf-maps\") pod \"cilium-jfcrt\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " pod="kube-system/cilium-jfcrt" Jul 2 00:43:34.781103 kubelet[1426]: I0702 00:43:34.781036 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-hostproc\") pod \"cilium-jfcrt\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " pod="kube-system/cilium-jfcrt" Jul 2 00:43:34.781103 kubelet[1426]: I0702 00:43:34.781055 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b707a177-ac43-427b-8cdd-752dddf1134f-cilium-config-path\") pod \"cilium-jfcrt\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " pod="kube-system/cilium-jfcrt" Jul 2 00:43:34.781103 kubelet[1426]: I0702 00:43:34.781077 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-host-proc-sys-kernel\") pod \"cilium-jfcrt\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " pod="kube-system/cilium-jfcrt" Jul 2 00:43:34.781103 kubelet[1426]: I0702 00:43:34.781097 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jbg2\" (UniqueName: \"kubernetes.io/projected/b5790ad4-8c59-4f9d-b395-4bec2b09c7dd-kube-api-access-7jbg2\") pod \"kube-proxy-xdcmp\" (UID: \"b5790ad4-8c59-4f9d-b395-4bec2b09c7dd\") " pod="kube-system/kube-proxy-xdcmp" Jul 2 00:43:35.077712 kubelet[1426]: E0702 00:43:35.077593 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:35.079207 env[1211]: time="2024-07-02T00:43:35.079169071Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-jfcrt,Uid:b707a177-ac43-427b-8cdd-752dddf1134f,Namespace:kube-system,Attempt:0,}" Jul 2 00:43:35.092212 kubelet[1426]: E0702 00:43:35.092188 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:35.093629 env[1211]: time="2024-07-02T00:43:35.093347071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xdcmp,Uid:b5790ad4-8c59-4f9d-b395-4bec2b09c7dd,Namespace:kube-system,Attempt:0,}" Jul 2 00:43:35.579981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3056635192.mount: Deactivated successfully. Jul 2 00:43:35.584505 env[1211]: time="2024-07-02T00:43:35.584459311Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:35.585670 env[1211]: time="2024-07-02T00:43:35.585643791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:35.588904 env[1211]: time="2024-07-02T00:43:35.588863791Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:35.590377 env[1211]: time="2024-07-02T00:43:35.590333871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:35.591084 env[1211]: time="2024-07-02T00:43:35.591035031Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:35.593373 env[1211]: time="2024-07-02T00:43:35.593345951Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:35.594883 env[1211]: time="2024-07-02T00:43:35.594858951Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:35.597452 env[1211]: time="2024-07-02T00:43:35.597423951Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:35.619769 env[1211]: time="2024-07-02T00:43:35.619694151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:43:35.619769 env[1211]: time="2024-07-02T00:43:35.619740471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:43:35.619929 env[1211]: time="2024-07-02T00:43:35.619768631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:43:35.620309 env[1211]: time="2024-07-02T00:43:35.620257151Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4122a8a787d6dd34ce71950e2f73eeeedd7369c0eabe7aab1fb31b02984538c4 pid=1489 runtime=io.containerd.runc.v2 Jul 2 00:43:35.620397 env[1211]: time="2024-07-02T00:43:35.620344231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:43:35.620397 env[1211]: time="2024-07-02T00:43:35.620388631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:43:35.620397 env[1211]: time="2024-07-02T00:43:35.620401231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:43:35.620540 env[1211]: time="2024-07-02T00:43:35.620509911Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469 pid=1490 runtime=io.containerd.runc.v2 Jul 2 00:43:35.636981 systemd[1]: Started cri-containerd-4122a8a787d6dd34ce71950e2f73eeeedd7369c0eabe7aab1fb31b02984538c4.scope. Jul 2 00:43:35.637978 systemd[1]: Started cri-containerd-4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469.scope. Jul 2 00:43:35.680617 env[1211]: time="2024-07-02T00:43:35.680568791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xdcmp,Uid:b5790ad4-8c59-4f9d-b395-4bec2b09c7dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4122a8a787d6dd34ce71950e2f73eeeedd7369c0eabe7aab1fb31b02984538c4\"" Jul 2 00:43:35.680834 env[1211]: time="2024-07-02T00:43:35.680801551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jfcrt,Uid:b707a177-ac43-427b-8cdd-752dddf1134f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\"" Jul 2 00:43:35.681452 kubelet[1426]: E0702 00:43:35.681413 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:35.681948 kubelet[1426]: E0702 00:43:35.681923 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:35.682806 env[1211]: time="2024-07-02T00:43:35.682775671Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 00:43:35.757699 kubelet[1426]: E0702 00:43:35.757654 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:36.758827 kubelet[1426]: E0702 00:43:36.758772 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:37.759371 kubelet[1426]: E0702 00:43:37.759326 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:38.760046 kubelet[1426]: E0702 00:43:38.760000 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:39.125988 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3033395412.mount: Deactivated successfully. Jul 2 00:43:39.760168 kubelet[1426]: E0702 00:43:39.760104 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:40.760741 kubelet[1426]: E0702 00:43:40.760702 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:41.250840 env[1211]: time="2024-07-02T00:43:41.250742311Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:41.252478 env[1211]: time="2024-07-02T00:43:41.252442231Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:41.253874 env[1211]: time="2024-07-02T00:43:41.253848631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:41.254469 env[1211]: time="2024-07-02T00:43:41.254445271Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 2 00:43:41.255193 env[1211]: time="2024-07-02T00:43:41.254994391Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 00:43:41.257542 env[1211]: time="2024-07-02T00:43:41.257505231Z" level=info msg="CreateContainer within sandbox \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:43:41.266981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1109282371.mount: Deactivated successfully. Jul 2 00:43:41.268925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2440338150.mount: Deactivated successfully. Jul 2 00:43:41.271412 env[1211]: time="2024-07-02T00:43:41.271160911Z" level=info msg="CreateContainer within sandbox \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cc2637d57a2194ba353d8840bb89dcbaf42d8b811b5e2ff30956a8846d531dcb\"" Jul 2 00:43:41.272295 env[1211]: time="2024-07-02T00:43:41.272252191Z" level=info msg="StartContainer for \"cc2637d57a2194ba353d8840bb89dcbaf42d8b811b5e2ff30956a8846d531dcb\"" Jul 2 00:43:41.290397 systemd[1]: Started cri-containerd-cc2637d57a2194ba353d8840bb89dcbaf42d8b811b5e2ff30956a8846d531dcb.scope. Jul 2 00:43:41.323116 env[1211]: time="2024-07-02T00:43:41.321607271Z" level=info msg="StartContainer for \"cc2637d57a2194ba353d8840bb89dcbaf42d8b811b5e2ff30956a8846d531dcb\" returns successfully" Jul 2 00:43:41.354673 systemd[1]: cri-containerd-cc2637d57a2194ba353d8840bb89dcbaf42d8b811b5e2ff30956a8846d531dcb.scope: Deactivated successfully. 
Jul 2 00:43:41.464419 env[1211]: time="2024-07-02T00:43:41.464366631Z" level=info msg="shim disconnected" id=cc2637d57a2194ba353d8840bb89dcbaf42d8b811b5e2ff30956a8846d531dcb Jul 2 00:43:41.464419 env[1211]: time="2024-07-02T00:43:41.464408671Z" level=warning msg="cleaning up after shim disconnected" id=cc2637d57a2194ba353d8840bb89dcbaf42d8b811b5e2ff30956a8846d531dcb namespace=k8s.io Jul 2 00:43:41.464419 env[1211]: time="2024-07-02T00:43:41.464417631Z" level=info msg="cleaning up dead shim" Jul 2 00:43:41.471816 env[1211]: time="2024-07-02T00:43:41.471778711Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:43:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1607 runtime=io.containerd.runc.v2\n" Jul 2 00:43:41.761247 kubelet[1426]: E0702 00:43:41.761175 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:41.958405 kubelet[1426]: E0702 00:43:41.958244 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:41.960314 env[1211]: time="2024-07-02T00:43:41.960270231Z" level=info msg="CreateContainer within sandbox \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:43:41.972847 env[1211]: time="2024-07-02T00:43:41.972792111Z" level=info msg="CreateContainer within sandbox \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b1a83a92b3955ee30e7fc145d46ec211bf3e8b0bfe1936ba35de794cb55e91f2\"" Jul 2 00:43:41.973284 env[1211]: time="2024-07-02T00:43:41.973260111Z" level=info msg="StartContainer for \"b1a83a92b3955ee30e7fc145d46ec211bf3e8b0bfe1936ba35de794cb55e91f2\"" Jul 2 00:43:41.989284 systemd[1]: Started cri-containerd-b1a83a92b3955ee30e7fc145d46ec211bf3e8b0bfe1936ba35de794cb55e91f2.scope. Jul 2 00:43:42.023547 env[1211]: time="2024-07-02T00:43:42.023447431Z" level=info msg="StartContainer for \"b1a83a92b3955ee30e7fc145d46ec211bf3e8b0bfe1936ba35de794cb55e91f2\" returns successfully" Jul 2 00:43:42.040420 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:43:42.040606 systemd[1]: Stopped systemd-sysctl.service. Jul 2 00:43:42.040786 systemd[1]: Stopping systemd-sysctl.service... Jul 2 00:43:42.042372 systemd[1]: Starting systemd-sysctl.service... Jul 2 00:43:42.045930 systemd[1]: cri-containerd-b1a83a92b3955ee30e7fc145d46ec211bf3e8b0bfe1936ba35de794cb55e91f2.scope: Deactivated successfully. Jul 2 00:43:42.051217 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 00:43:42.071094 env[1211]: time="2024-07-02T00:43:42.071049871Z" level=info msg="shim disconnected" id=b1a83a92b3955ee30e7fc145d46ec211bf3e8b0bfe1936ba35de794cb55e91f2 Jul 2 00:43:42.071363 env[1211]: time="2024-07-02T00:43:42.071343591Z" level=warning msg="cleaning up after shim disconnected" id=b1a83a92b3955ee30e7fc145d46ec211bf3e8b0bfe1936ba35de794cb55e91f2 namespace=k8s.io Jul 2 00:43:42.071433 env[1211]: time="2024-07-02T00:43:42.071419351Z" level=info msg="cleaning up dead shim" Jul 2 00:43:42.078500 env[1211]: time="2024-07-02T00:43:42.078442071Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:43:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1671 runtime=io.containerd.runc.v2\n" Jul 2 00:43:42.265616 systemd[1]: run-containerd-runc-k8s.io-cc2637d57a2194ba353d8840bb89dcbaf42d8b811b5e2ff30956a8846d531dcb-runc.GCMHVg.mount: Deactivated successfully. Jul 2 00:43:42.265706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc2637d57a2194ba353d8840bb89dcbaf42d8b811b5e2ff30956a8846d531dcb-rootfs.mount: Deactivated successfully. Jul 2 00:43:42.344203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount112805745.mount: Deactivated successfully. Jul 2 00:43:42.762256 kubelet[1426]: E0702 00:43:42.762141 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:42.791961 env[1211]: time="2024-07-02T00:43:42.791908591Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:42.793161 env[1211]: time="2024-07-02T00:43:42.793120871Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:42.794451 env[1211]: time="2024-07-02T00:43:42.794427111Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:42.795512 env[1211]: time="2024-07-02T00:43:42.795490511Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:42.795838 env[1211]: time="2024-07-02T00:43:42.795813351Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\"" Jul 2 00:43:42.797784 env[1211]: time="2024-07-02T00:43:42.797753431Z" level=info msg="CreateContainer within sandbox \"4122a8a787d6dd34ce71950e2f73eeeedd7369c0eabe7aab1fb31b02984538c4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:43:42.813131 env[1211]: time="2024-07-02T00:43:42.813085351Z" level=info msg="CreateContainer within sandbox \"4122a8a787d6dd34ce71950e2f73eeeedd7369c0eabe7aab1fb31b02984538c4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1eb0b6f7f53e73f3fb4732fa2f9cfcf6b0125cce535abe68731d98e93557302a\"" Jul 2 00:43:42.813741 env[1211]: time="2024-07-02T00:43:42.813689071Z" level=info msg="StartContainer for \"1eb0b6f7f53e73f3fb4732fa2f9cfcf6b0125cce535abe68731d98e93557302a\"" Jul 2 00:43:42.827419 systemd[1]: Started 
cri-containerd-1eb0b6f7f53e73f3fb4732fa2f9cfcf6b0125cce535abe68731d98e93557302a.scope. Jul 2 00:43:42.863080 env[1211]: time="2024-07-02T00:43:42.863031511Z" level=info msg="StartContainer for \"1eb0b6f7f53e73f3fb4732fa2f9cfcf6b0125cce535abe68731d98e93557302a\" returns successfully" Jul 2 00:43:42.961318 kubelet[1426]: E0702 00:43:42.960831 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:42.962866 kubelet[1426]: E0702 00:43:42.962847 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:42.964603 env[1211]: time="2024-07-02T00:43:42.964550591Z" level=info msg="CreateContainer within sandbox \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:43:42.998570 env[1211]: time="2024-07-02T00:43:42.998518751Z" level=info msg="CreateContainer within sandbox \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0fffac8d88a37a01f0ba8b026f466e5176b249ef0b38ac17f800c3851fcde20f\"" Jul 2 00:43:42.999043 env[1211]: time="2024-07-02T00:43:42.999004191Z" level=info msg="StartContainer for \"0fffac8d88a37a01f0ba8b026f466e5176b249ef0b38ac17f800c3851fcde20f\"" Jul 2 00:43:43.002324 kubelet[1426]: I0702 00:43:43.002265 1426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xdcmp" podStartSLOduration=2.888747791 podStartE2EDuration="10.002205231s" podCreationTimestamp="2024-07-02 00:43:33 +0000 UTC" firstStartedPulling="2024-07-02 00:43:35.682588511 +0000 UTC m=+2.774493081" lastFinishedPulling="2024-07-02 00:43:42.796045951 +0000 UTC m=+9.887950521" observedRunningTime="2024-07-02 00:43:42.986128551 +0000 UTC m=+10.078033121" watchObservedRunningTime="2024-07-02 00:43:43.002205231 +0000 UTC m=+10.094109801" Jul 2 00:43:43.014997 systemd[1]: Started cri-containerd-0fffac8d88a37a01f0ba8b026f466e5176b249ef0b38ac17f800c3851fcde20f.scope. Jul 2 00:43:43.057899 env[1211]: time="2024-07-02T00:43:43.057848351Z" level=info msg="StartContainer for \"0fffac8d88a37a01f0ba8b026f466e5176b249ef0b38ac17f800c3851fcde20f\" returns successfully" Jul 2 00:43:43.064724 systemd[1]: cri-containerd-0fffac8d88a37a01f0ba8b026f466e5176b249ef0b38ac17f800c3851fcde20f.scope: Deactivated successfully. Jul 2 00:43:43.183889 env[1211]: time="2024-07-02T00:43:43.183827591Z" level=info msg="shim disconnected" id=0fffac8d88a37a01f0ba8b026f466e5176b249ef0b38ac17f800c3851fcde20f Jul 2 00:43:43.183889 env[1211]: time="2024-07-02T00:43:43.183876511Z" level=warning msg="cleaning up after shim disconnected" id=0fffac8d88a37a01f0ba8b026f466e5176b249ef0b38ac17f800c3851fcde20f namespace=k8s.io Jul 2 00:43:43.183889 env[1211]: time="2024-07-02T00:43:43.183885391Z" level=info msg="cleaning up dead shim" Jul 2 00:43:43.192238 env[1211]: time="2024-07-02T00:43:43.192176351Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:43:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1824 runtime=io.containerd.runc.v2\n" Jul 2 00:43:43.265347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount684583000.mount: Deactivated successfully. 
Jul 2 00:43:43.762313 kubelet[1426]: E0702 00:43:43.762252 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:43.966534 kubelet[1426]: E0702 00:43:43.966376 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:43.966534 kubelet[1426]: E0702 00:43:43.966467 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:43.968427 env[1211]: time="2024-07-02T00:43:43.968389511Z" level=info msg="CreateContainer within sandbox \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:43:43.978667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928874113.mount: Deactivated successfully. Jul 2 00:43:43.987687 env[1211]: time="2024-07-02T00:43:43.987639751Z" level=info msg="CreateContainer within sandbox \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a6fd81f232f94dc2c8fb16c09d15cfe2b333b1f2b9575df0ef0b60bf13cd72f7\"" Jul 2 00:43:43.988249 env[1211]: time="2024-07-02T00:43:43.988221311Z" level=info msg="StartContainer for \"a6fd81f232f94dc2c8fb16c09d15cfe2b333b1f2b9575df0ef0b60bf13cd72f7\"" Jul 2 00:43:44.002819 systemd[1]: Started cri-containerd-a6fd81f232f94dc2c8fb16c09d15cfe2b333b1f2b9575df0ef0b60bf13cd72f7.scope. Jul 2 00:43:44.030807 systemd[1]: cri-containerd-a6fd81f232f94dc2c8fb16c09d15cfe2b333b1f2b9575df0ef0b60bf13cd72f7.scope: Deactivated successfully. Jul 2 00:43:44.031479 env[1211]: time="2024-07-02T00:43:44.031364751Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb707a177_ac43_427b_8cdd_752dddf1134f.slice/cri-containerd-a6fd81f232f94dc2c8fb16c09d15cfe2b333b1f2b9575df0ef0b60bf13cd72f7.scope/memory.events\": no such file or directory" Jul 2 00:43:44.033272 env[1211]: time="2024-07-02T00:43:44.033238031Z" level=info msg="StartContainer for \"a6fd81f232f94dc2c8fb16c09d15cfe2b333b1f2b9575df0ef0b60bf13cd72f7\" returns successfully" Jul 2 00:43:44.050234 env[1211]: time="2024-07-02T00:43:44.050192071Z" level=info msg="shim disconnected" id=a6fd81f232f94dc2c8fb16c09d15cfe2b333b1f2b9575df0ef0b60bf13cd72f7 Jul 2 00:43:44.050234 env[1211]: time="2024-07-02T00:43:44.050235911Z" level=warning msg="cleaning up after shim disconnected" id=a6fd81f232f94dc2c8fb16c09d15cfe2b333b1f2b9575df0ef0b60bf13cd72f7 namespace=k8s.io Jul 2 00:43:44.050425 env[1211]: time="2024-07-02T00:43:44.050245071Z" level=info msg="cleaning up dead shim" Jul 2 00:43:44.057186 env[1211]: time="2024-07-02T00:43:44.057137991Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:43:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1944 runtime=io.containerd.runc.v2\n" Jul 2 00:43:44.264778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6fd81f232f94dc2c8fb16c09d15cfe2b333b1f2b9575df0ef0b60bf13cd72f7-rootfs.mount: Deactivated successfully. 
Jul 2 00:43:44.762440 kubelet[1426]: E0702 00:43:44.762404 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:44.969460 kubelet[1426]: E0702 00:43:44.969290 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:44.971490 env[1211]: time="2024-07-02T00:43:44.971443511Z" level=info msg="CreateContainer within sandbox \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:43:44.984942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1114327406.mount: Deactivated successfully. Jul 2 00:43:44.988530 env[1211]: time="2024-07-02T00:43:44.988475031Z" level=info msg="CreateContainer within sandbox \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536\"" Jul 2 00:43:44.989229 env[1211]: time="2024-07-02T00:43:44.989144671Z" level=info msg="StartContainer for \"b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536\"" Jul 2 00:43:45.004978 systemd[1]: Started cri-containerd-b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536.scope. Jul 2 00:43:45.041104 env[1211]: time="2024-07-02T00:43:45.041001911Z" level=info msg="StartContainer for \"b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536\" returns successfully" Jul 2 00:43:45.169964 kubelet[1426]: I0702 00:43:45.169880 1426 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:43:45.314202 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 2 00:43:45.577239 kernel: Initializing XFRM netlink socket Jul 2 00:43:45.580180 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Jul 2 00:43:45.763485 kubelet[1426]: E0702 00:43:45.763422 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:45.974188 kubelet[1426]: E0702 00:43:45.974083 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:45.993698 kubelet[1426]: I0702 00:43:45.993650 1426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jfcrt" podStartSLOduration=7.421141791 podStartE2EDuration="12.993602631s" podCreationTimestamp="2024-07-02 00:43:33 +0000 UTC" firstStartedPulling="2024-07-02 00:43:35.682367511 +0000 UTC m=+2.774272081" lastFinishedPulling="2024-07-02 00:43:41.254828391 +0000 UTC m=+8.346732921" observedRunningTime="2024-07-02 00:43:45.990267831 +0000 UTC m=+13.082172401" watchObservedRunningTime="2024-07-02 00:43:45.993602631 +0000 UTC m=+13.085507201" Jul 2 00:43:46.764580 kubelet[1426]: E0702 00:43:46.764534 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:46.797629 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 2 00:43:46.797724 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 00:43:46.795860 systemd-networkd[1037]: cilium_host: Link UP Jul 2 00:43:46.795986 systemd-networkd[1037]: cilium_net: Link UP Jul 2 00:43:46.798353 systemd-networkd[1037]: cilium_net: Gained carrier Jul 2 00:43:46.798538 systemd-networkd[1037]: cilium_host: Gained carrier Jul 2 00:43:46.883371 systemd-networkd[1037]: cilium_vxlan: Link UP Jul 2 00:43:46.883376 systemd-networkd[1037]: cilium_vxlan: Gained carrier Jul 2 00:43:46.975776 kubelet[1426]: E0702 00:43:46.975742 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:47.113311 systemd-networkd[1037]: cilium_net: Gained IPv6LL Jul 2 00:43:47.166195 kernel: NET: Registered PF_ALG protocol family Jul 2 00:43:47.705320 systemd-networkd[1037]: cilium_host: Gained IPv6LL Jul 2 00:43:47.765506 kubelet[1426]: E0702 00:43:47.765456 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:47.768778 systemd-networkd[1037]: lxc_health: Link UP Jul 2 00:43:47.769580 systemd-networkd[1037]: lxc_health: Gained carrier Jul 2 00:43:47.770188 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 00:43:47.978258 kubelet[1426]: E0702 00:43:47.978130 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:48.217269 systemd-networkd[1037]: cilium_vxlan: Gained IPv6LL Jul 2 00:43:48.765737 kubelet[1426]: E0702 00:43:48.765691 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:48.985300 systemd-networkd[1037]: lxc_health: Gained IPv6LL Jul 2 00:43:49.705670 kubelet[1426]: E0702 00:43:49.705592 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:49.767361 kubelet[1426]: E0702 00:43:49.767323 1426 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:49.980793 kubelet[1426]: E0702 00:43:49.980676 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:50.441373 kubelet[1426]: I0702 00:43:50.441261 1426 topology_manager.go:215] "Topology Admit Handler" podUID="d3d98210-c95d-49fb-aa90-4f9487cd57a1" podNamespace="default" podName="nginx-deployment-6d5f899847-mg862" Jul 2 00:43:50.445760 systemd[1]: Created slice kubepods-besteffort-podd3d98210_c95d_49fb_aa90_4f9487cd57a1.slice. Jul 2 00:43:50.469457 kubelet[1426]: I0702 00:43:50.469414 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-529dn\" (UniqueName: \"kubernetes.io/projected/d3d98210-c95d-49fb-aa90-4f9487cd57a1-kube-api-access-529dn\") pod \"nginx-deployment-6d5f899847-mg862\" (UID: \"d3d98210-c95d-49fb-aa90-4f9487cd57a1\") " pod="default/nginx-deployment-6d5f899847-mg862" Jul 2 00:43:50.749038 env[1211]: time="2024-07-02T00:43:50.748927831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-mg862,Uid:d3d98210-c95d-49fb-aa90-4f9487cd57a1,Namespace:default,Attempt:0,}" Jul 2 00:43:50.768043 kubelet[1426]: E0702 00:43:50.768010 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:50.931187 systemd-networkd[1037]: lxcd74e4eb50c56: Link UP Jul 2 00:43:50.938030 kernel: eth0: renamed from tmp54b1b Jul 2 00:43:50.944916 systemd-networkd[1037]: lxcd74e4eb50c56: Gained carrier Jul 2 00:43:50.945751 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 00:43:50.945824 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd74e4eb50c56: link becomes ready Jul 2 00:43:50.982823 kubelet[1426]: E0702 00:43:50.982790 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:51.769209 kubelet[1426]: E0702 00:43:51.769144 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:52.267068 env[1211]: time="2024-07-02T00:43:52.266941431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:43:52.267068 env[1211]: time="2024-07-02T00:43:52.266982831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:43:52.267440 env[1211]: time="2024-07-02T00:43:52.266994311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:43:52.267727 env[1211]: time="2024-07-02T00:43:52.267480591Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/54b1b24a2ff036479452eb4eaee76a4fb5db45cd1dbf065d8f26538f53478ceb pid=2497 runtime=io.containerd.runc.v2 Jul 2 00:43:52.281272 systemd[1]: Started cri-containerd-54b1b24a2ff036479452eb4eaee76a4fb5db45cd1dbf065d8f26538f53478ceb.scope. 
Jul 2 00:43:52.365275 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:43:52.384142 env[1211]: time="2024-07-02T00:43:52.384092111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-mg862,Uid:d3d98210-c95d-49fb-aa90-4f9487cd57a1,Namespace:default,Attempt:0,} returns sandbox id \"54b1b24a2ff036479452eb4eaee76a4fb5db45cd1dbf065d8f26538f53478ceb\"" Jul 2 00:43:52.385606 env[1211]: time="2024-07-02T00:43:52.385562551Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 00:43:52.769907 kubelet[1426]: E0702 00:43:52.769866 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:52.953364 systemd-networkd[1037]: lxcd74e4eb50c56: Gained IPv6LL Jul 2 00:43:53.756593 kubelet[1426]: E0702 00:43:53.756546 1426 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:53.770859 kubelet[1426]: E0702 00:43:53.770822 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:54.385846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2800546105.mount: Deactivated successfully. Jul 2 00:43:54.771268 kubelet[1426]: E0702 00:43:54.771224 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:55.557974 env[1211]: time="2024-07-02T00:43:55.557914071Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:55.559319 env[1211]: time="2024-07-02T00:43:55.559290711Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d3caadc252cc3b24921aae8c484cb83879b0b39cb20bb8d23a3a54872427653,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:55.562093 env[1211]: time="2024-07-02T00:43:55.562055791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:55.566318 env[1211]: time="2024-07-02T00:43:55.566288431Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:55.566984 env[1211]: time="2024-07-02T00:43:55.566942151Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:2d3caadc252cc3b24921aae8c484cb83879b0b39cb20bb8d23a3a54872427653\"" Jul 2 00:43:55.569236 env[1211]: time="2024-07-02T00:43:55.569204911Z" level=info msg="CreateContainer within sandbox \"54b1b24a2ff036479452eb4eaee76a4fb5db45cd1dbf065d8f26538f53478ceb\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 2 00:43:55.578502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount7344279.mount: Deactivated successfully. 
Jul 2 00:43:55.582689 env[1211]: time="2024-07-02T00:43:55.582656951Z" level=info msg="CreateContainer within sandbox \"54b1b24a2ff036479452eb4eaee76a4fb5db45cd1dbf065d8f26538f53478ceb\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"9b08d9a6e0d21dcfd173c04a2176b69d04f1ef372181968feffdbc1edaaed33e\"" Jul 2 00:43:55.583354 env[1211]: time="2024-07-02T00:43:55.583325191Z" level=info msg="StartContainer for \"9b08d9a6e0d21dcfd173c04a2176b69d04f1ef372181968feffdbc1edaaed33e\"" Jul 2 00:43:55.600303 systemd[1]: Started cri-containerd-9b08d9a6e0d21dcfd173c04a2176b69d04f1ef372181968feffdbc1edaaed33e.scope. Jul 2 00:43:55.645284 env[1211]: time="2024-07-02T00:43:55.645228391Z" level=info msg="StartContainer for \"9b08d9a6e0d21dcfd173c04a2176b69d04f1ef372181968feffdbc1edaaed33e\" returns successfully" Jul 2 00:43:55.772339 kubelet[1426]: E0702 00:43:55.772279 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:55.998177 kubelet[1426]: I0702 00:43:55.998133 1426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-mg862" podStartSLOduration=2.815640591 podStartE2EDuration="5.998096111s" podCreationTimestamp="2024-07-02 00:43:50 +0000 UTC" firstStartedPulling="2024-07-02 00:43:52.385202191 +0000 UTC m=+19.477106761" lastFinishedPulling="2024-07-02 00:43:55.567657711 +0000 UTC m=+22.659562281" observedRunningTime="2024-07-02 00:43:55.998026711 +0000 UTC m=+23.089931281" watchObservedRunningTime="2024-07-02 00:43:55.998096111 +0000 UTC m=+23.090000641" Jul 2 00:43:56.773468 kubelet[1426]: E0702 00:43:56.773428 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:57.774364 kubelet[1426]: E0702 00:43:57.774311 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:58.774684 kubelet[1426]: E0702 00:43:58.774641 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:43:59.775052 kubelet[1426]: E0702 00:43:59.775004 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:00.776031 kubelet[1426]: E0702 00:44:00.775986 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:01.776265 kubelet[1426]: E0702 00:44:01.776228 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:02.501025 kubelet[1426]: I0702 00:44:02.500284 1426 topology_manager.go:215] "Topology Admit Handler" podUID="c1d29406-e81f-4099-b306-0938fd2cb95a" podNamespace="default" podName="nfs-server-provisioner-0" Jul 2 00:44:02.507779 systemd[1]: Created slice kubepods-besteffort-podc1d29406_e81f_4099_b306_0938fd2cb95a.slice. 
Jul 2 00:44:02.536970 kubelet[1426]: I0702 00:44:02.536831 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jglp\" (UniqueName: \"kubernetes.io/projected/c1d29406-e81f-4099-b306-0938fd2cb95a-kube-api-access-5jglp\") pod \"nfs-server-provisioner-0\" (UID: \"c1d29406-e81f-4099-b306-0938fd2cb95a\") " pod="default/nfs-server-provisioner-0" Jul 2 00:44:02.536970 kubelet[1426]: I0702 00:44:02.536875 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c1d29406-e81f-4099-b306-0938fd2cb95a-data\") pod \"nfs-server-provisioner-0\" (UID: \"c1d29406-e81f-4099-b306-0938fd2cb95a\") " pod="default/nfs-server-provisioner-0" Jul 2 00:44:02.777064 kubelet[1426]: E0702 00:44:02.776944 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:02.814177 env[1211]: time="2024-07-02T00:44:02.814115958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c1d29406-e81f-4099-b306-0938fd2cb95a,Namespace:default,Attempt:0,}" Jul 2 00:44:02.846032 systemd-networkd[1037]: lxc6bff45457fe0: Link UP Jul 2 00:44:02.860100 kernel: eth0: renamed from tmp594df Jul 2 00:44:02.881430 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 00:44:02.881521 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6bff45457fe0: link becomes ready Jul 2 00:44:02.882213 systemd-networkd[1037]: lxc6bff45457fe0: Gained carrier Jul 2 00:44:03.126533 env[1211]: time="2024-07-02T00:44:03.126396020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:44:03.126533 env[1211]: time="2024-07-02T00:44:03.126499942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:44:03.126681 env[1211]: time="2024-07-02T00:44:03.126529983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:44:03.126981 env[1211]: time="2024-07-02T00:44:03.126951511Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/594df11edd0e708e98ad51a902944ef376b1108409d7898b0c0d6495fe97cd6d pid=2627 runtime=io.containerd.runc.v2 Jul 2 00:44:03.139031 systemd[1]: Started cri-containerd-594df11edd0e708e98ad51a902944ef376b1108409d7898b0c0d6495fe97cd6d.scope. 
Jul 2 00:44:03.161020 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:44:03.178093 env[1211]: time="2024-07-02T00:44:03.178043808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c1d29406-e81f-4099-b306-0938fd2cb95a,Namespace:default,Attempt:0,} returns sandbox id \"594df11edd0e708e98ad51a902944ef376b1108409d7898b0c0d6495fe97cd6d\"" Jul 2 00:44:03.179903 env[1211]: time="2024-07-02T00:44:03.179866164Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 2 00:44:03.778555 kubelet[1426]: E0702 00:44:03.778515 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:04.537721 systemd-networkd[1037]: lxc6bff45457fe0: Gained IPv6LL Jul 2 00:44:04.779262 kubelet[1426]: E0702 00:44:04.779206 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:05.203866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1760807676.mount: Deactivated successfully. Jul 2 00:44:05.780239 kubelet[1426]: E0702 00:44:05.780202 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:06.781191 kubelet[1426]: E0702 00:44:06.781135 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:07.367269 env[1211]: time="2024-07-02T00:44:07.367213995Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:07.368597 env[1211]: time="2024-07-02T00:44:07.368567616Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:07.370316 env[1211]: time="2024-07-02T00:44:07.370287683Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:07.375819 env[1211]: time="2024-07-02T00:44:07.375785367Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:07.376540 env[1211]: time="2024-07-02T00:44:07.376510618Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jul 2 00:44:07.378665 env[1211]: time="2024-07-02T00:44:07.378636211Z" level=info msg="CreateContainer within sandbox \"594df11edd0e708e98ad51a902944ef376b1108409d7898b0c0d6495fe97cd6d\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 2 00:44:07.390001 env[1211]: time="2024-07-02T00:44:07.389951465Z" level=info msg="CreateContainer within sandbox \"594df11edd0e708e98ad51a902944ef376b1108409d7898b0c0d6495fe97cd6d\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"330b4e39f7c110c0888693bf912bfb3a771557f9a4839488e561d56dc24c5c8d\"" Jul 2 00:44:07.390475 env[1211]: 
time="2024-07-02T00:44:07.390451232Z" level=info msg="StartContainer for \"330b4e39f7c110c0888693bf912bfb3a771557f9a4839488e561d56dc24c5c8d\"" Jul 2 00:44:07.406344 systemd[1]: Started cri-containerd-330b4e39f7c110c0888693bf912bfb3a771557f9a4839488e561d56dc24c5c8d.scope. Jul 2 00:44:07.476040 env[1211]: time="2024-07-02T00:44:07.475998627Z" level=info msg="StartContainer for \"330b4e39f7c110c0888693bf912bfb3a771557f9a4839488e561d56dc24c5c8d\" returns successfully" Jul 2 00:44:07.781616 kubelet[1426]: E0702 00:44:07.781582 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:08.020751 kubelet[1426]: I0702 00:44:08.020634 1426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.823309075 podStartE2EDuration="6.02058666s" podCreationTimestamp="2024-07-02 00:44:02 +0000 UTC" firstStartedPulling="2024-07-02 00:44:03.179559758 +0000 UTC m=+30.271464328" lastFinishedPulling="2024-07-02 00:44:07.376837383 +0000 UTC m=+34.468741913" observedRunningTime="2024-07-02 00:44:08.020357017 +0000 UTC m=+35.112261587" watchObservedRunningTime="2024-07-02 00:44:08.02058666 +0000 UTC m=+35.112491230" Jul 2 00:44:08.782612 kubelet[1426]: E0702 00:44:08.782563 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:09.783233 kubelet[1426]: E0702 00:44:09.783184 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:10.474328 update_engine[1205]: I0702 00:44:10.469396 1205 update_attempter.cc:509] Updating boot flags... Jul 2 00:44:10.783634 kubelet[1426]: E0702 00:44:10.783526 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:11.783695 kubelet[1426]: E0702 00:44:11.783654 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:12.784303 kubelet[1426]: E0702 00:44:12.784261 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:13.756134 kubelet[1426]: E0702 00:44:13.756091 1426 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:13.785268 kubelet[1426]: E0702 00:44:13.785232 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:14.785574 kubelet[1426]: E0702 00:44:14.785533 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:15.785939 kubelet[1426]: E0702 00:44:15.785897 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:16.786756 kubelet[1426]: E0702 00:44:16.786719 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:17.467091 kubelet[1426]: I0702 00:44:17.467049 1426 topology_manager.go:215] "Topology Admit Handler" podUID="ce088a4e-4b63-444c-adea-8d534b1292c9" podNamespace="default" podName="test-pod-1" Jul 2 00:44:17.475831 systemd[1]: Created slice kubepods-besteffort-podce088a4e_4b63_444c_adea_8d534b1292c9.slice. 
Jul 2 00:44:17.512065 kubelet[1426]: I0702 00:44:17.512036 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a88f8054-94f2-45f6-9b84-216f822b37ab\" (UniqueName: \"kubernetes.io/nfs/ce088a4e-4b63-444c-adea-8d534b1292c9-pvc-a88f8054-94f2-45f6-9b84-216f822b37ab\") pod \"test-pod-1\" (UID: \"ce088a4e-4b63-444c-adea-8d534b1292c9\") " pod="default/test-pod-1" Jul 2 00:44:17.512065 kubelet[1426]: I0702 00:44:17.512077 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-998mx\" (UniqueName: \"kubernetes.io/projected/ce088a4e-4b63-444c-adea-8d534b1292c9-kube-api-access-998mx\") pod \"test-pod-1\" (UID: \"ce088a4e-4b63-444c-adea-8d534b1292c9\") " pod="default/test-pod-1" Jul 2 00:44:17.647183 kernel: FS-Cache: Loaded Jul 2 00:44:17.677355 kernel: RPC: Registered named UNIX socket transport module. Jul 2 00:44:17.677473 kernel: RPC: Registered udp transport module. Jul 2 00:44:17.677514 kernel: RPC: Registered tcp transport module. Jul 2 00:44:17.677538 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 2 00:44:17.718184 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 2 00:44:17.786841 kubelet[1426]: E0702 00:44:17.786804 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:17.846500 kernel: NFS: Registering the id_resolver key type Jul 2 00:44:17.846600 kernel: Key type id_resolver registered Jul 2 00:44:17.847175 kernel: Key type id_legacy registered Jul 2 00:44:17.882420 nfsidmap[2763]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 2 00:44:17.889127 nfsidmap[2766]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 2 00:44:18.083111 env[1211]: time="2024-07-02T00:44:18.081181793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ce088a4e-4b63-444c-adea-8d534b1292c9,Namespace:default,Attempt:0,}" Jul 2 00:44:18.124020 systemd-networkd[1037]: lxce6ae999a5353: Link UP Jul 2 00:44:18.132625 kernel: eth0: renamed from tmpbe822 Jul 2 00:44:18.143600 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 00:44:18.143682 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce6ae999a5353: link becomes ready Jul 2 00:44:18.144276 systemd-networkd[1037]: lxce6ae999a5353: Gained carrier Jul 2 00:44:18.429011 env[1211]: time="2024-07-02T00:44:18.428836501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:44:18.429011 env[1211]: time="2024-07-02T00:44:18.428886221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:44:18.429011 env[1211]: time="2024-07-02T00:44:18.428904861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:44:18.429948 env[1211]: time="2024-07-02T00:44:18.429065703Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be82248fb0940e4f6049ee6d98a639a75538a1ad79b43cc8cedaa4f4c740a1f8 pid=2800 runtime=io.containerd.runc.v2 Jul 2 00:44:18.448475 systemd[1]: Started cri-containerd-be82248fb0940e4f6049ee6d98a639a75538a1ad79b43cc8cedaa4f4c740a1f8.scope. 
Jul 2 00:44:18.479543 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:44:18.496135 env[1211]: time="2024-07-02T00:44:18.496094609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ce088a4e-4b63-444c-adea-8d534b1292c9,Namespace:default,Attempt:0,} returns sandbox id \"be82248fb0940e4f6049ee6d98a639a75538a1ad79b43cc8cedaa4f4c740a1f8\"" Jul 2 00:44:18.500377 env[1211]: time="2024-07-02T00:44:18.500332121Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 00:44:18.726321 env[1211]: time="2024-07-02T00:44:18.726207988Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:18.727536 env[1211]: time="2024-07-02T00:44:18.727499478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:2d3caadc252cc3b24921aae8c484cb83879b0b39cb20bb8d23a3a54872427653,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:18.729760 env[1211]: time="2024-07-02T00:44:18.729733455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:18.731326 env[1211]: time="2024-07-02T00:44:18.731296547Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:18.732209 env[1211]: time="2024-07-02T00:44:18.732172993Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:2d3caadc252cc3b24921aae8c484cb83879b0b39cb20bb8d23a3a54872427653\"" Jul 2 00:44:18.734527 env[1211]: time="2024-07-02T00:44:18.734490891Z" level=info msg="CreateContainer within sandbox \"be82248fb0940e4f6049ee6d98a639a75538a1ad79b43cc8cedaa4f4c740a1f8\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 2 00:44:18.745299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1877623971.mount: Deactivated successfully. Jul 2 00:44:18.747374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2865278196.mount: Deactivated successfully. Jul 2 00:44:18.748109 env[1211]: time="2024-07-02T00:44:18.748023833Z" level=info msg="CreateContainer within sandbox \"be82248fb0940e4f6049ee6d98a639a75538a1ad79b43cc8cedaa4f4c740a1f8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"9cc2228bc4fdb668c644956a4783ca1203d6ff25704e9c1813145d5744f66d04\"" Jul 2 00:44:18.749346 env[1211]: time="2024-07-02T00:44:18.749317283Z" level=info msg="StartContainer for \"9cc2228bc4fdb668c644956a4783ca1203d6ff25704e9c1813145d5744f66d04\"" Jul 2 00:44:18.765682 systemd[1]: Started cri-containerd-9cc2228bc4fdb668c644956a4783ca1203d6ff25704e9c1813145d5744f66d04.scope. 
Jul 2 00:44:18.787000 kubelet[1426]: E0702 00:44:18.786960 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:18.807175 env[1211]: time="2024-07-02T00:44:18.806463795Z" level=info msg="StartContainer for \"9cc2228bc4fdb668c644956a4783ca1203d6ff25704e9c1813145d5744f66d04\" returns successfully" Jul 2 00:44:19.041623 kubelet[1426]: I0702 00:44:19.041510 1426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.807259148 podStartE2EDuration="17.041470438s" podCreationTimestamp="2024-07-02 00:44:02 +0000 UTC" firstStartedPulling="2024-07-02 00:44:18.498183105 +0000 UTC m=+45.590087635" lastFinishedPulling="2024-07-02 00:44:18.732394355 +0000 UTC m=+45.824298925" observedRunningTime="2024-07-02 00:44:19.040993515 +0000 UTC m=+46.132898085" watchObservedRunningTime="2024-07-02 00:44:19.041470438 +0000 UTC m=+46.133374968" Jul 2 00:44:19.769413 systemd-networkd[1037]: lxce6ae999a5353: Gained IPv6LL Jul 2 00:44:19.787341 kubelet[1426]: E0702 00:44:19.787299 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:20.787791 kubelet[1426]: E0702 00:44:20.787744 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:21.788494 kubelet[1426]: E0702 00:44:21.788455 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:22.789702 kubelet[1426]: E0702 00:44:22.789653 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:23.790758 kubelet[1426]: E0702 00:44:23.790700 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:24.791804 kubelet[1426]: E0702 00:44:24.791765 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:25.791976 kubelet[1426]: E0702 00:44:25.791920 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:25.905007 systemd[1]: run-containerd-runc-k8s.io-b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536-runc.8tmMXy.mount: Deactivated successfully. Jul 2 00:44:25.956824 env[1211]: time="2024-07-02T00:44:25.956735228Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:44:25.961742 env[1211]: time="2024-07-02T00:44:25.961704492Z" level=info msg="StopContainer for \"b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536\" with timeout 2 (s)" Jul 2 00:44:25.962191 env[1211]: time="2024-07-02T00:44:25.962168174Z" level=info msg="Stop container \"b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536\" with signal terminated" Jul 2 00:44:25.969889 systemd-networkd[1037]: lxc_health: Link DOWN Jul 2 00:44:25.969898 systemd-networkd[1037]: lxc_health: Lost carrier Jul 2 00:44:26.006573 systemd[1]: cri-containerd-b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536.scope: Deactivated successfully. 
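The pod_startup_latency_tracker line above encodes a small piece of arithmetic: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp, and podStartSLOduration is that end-to-end figure minus the time spent pulling the image (taken from the monotonic "m=+..." offsets). A quick check that reproduces the logged numbers, assuming that is the tracker's definition:

    # Reproduce the pod_startup_latency_tracker figures for default/test-pod-1.
    # Inputs are copied from the log line above; the formula (SLO duration =
    # end-to-end startup minus image-pull time) is the assumed definition.
    created          = 2.000000000    # podCreationTimestamp 00:44:02, seconds after 00:44:00
    watch_running    = 19.041470438   # watchObservedRunningTime 00:44:19.041470438
    pull_started     = 45.590087635   # firstStartedPulling, monotonic "m=+..." offset
    pull_finished    = 45.824298925   # lastFinishedPulling,  monotonic "m=+..." offset

    e2e = watch_running - created
    slo = e2e - (pull_finished - pull_started)

    print(f"podStartE2EDuration = {e2e:.9f}s")   # 17.041470438s, as logged
    print(f"podStartSLOduration = {slo:.9f}")    # 16.807259148, as logged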
Jul 2 00:44:26.006900 systemd[1]: cri-containerd-b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536.scope: Consumed 6.603s CPU time. Jul 2 00:44:26.021894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536-rootfs.mount: Deactivated successfully. Jul 2 00:44:26.032882 env[1211]: time="2024-07-02T00:44:26.032836505Z" level=info msg="shim disconnected" id=b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536 Jul 2 00:44:26.033102 env[1211]: time="2024-07-02T00:44:26.033083746Z" level=warning msg="cleaning up after shim disconnected" id=b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536 namespace=k8s.io Jul 2 00:44:26.033232 env[1211]: time="2024-07-02T00:44:26.033217026Z" level=info msg="cleaning up dead shim" Jul 2 00:44:26.039744 env[1211]: time="2024-07-02T00:44:26.039702536Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2931 runtime=io.containerd.runc.v2\n" Jul 2 00:44:26.042351 env[1211]: time="2024-07-02T00:44:26.042263307Z" level=info msg="StopContainer for \"b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536\" returns successfully" Jul 2 00:44:26.043463 env[1211]: time="2024-07-02T00:44:26.043438513Z" level=info msg="StopPodSandbox for \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\"" Jul 2 00:44:26.043600 env[1211]: time="2024-07-02T00:44:26.043580153Z" level=info msg="Container to stop \"cc2637d57a2194ba353d8840bb89dcbaf42d8b811b5e2ff30956a8846d531dcb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:44:26.043674 env[1211]: time="2024-07-02T00:44:26.043658514Z" level=info msg="Container to stop \"0fffac8d88a37a01f0ba8b026f466e5176b249ef0b38ac17f800c3851fcde20f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:44:26.043733 env[1211]: time="2024-07-02T00:44:26.043717994Z" level=info msg="Container to stop \"b1a83a92b3955ee30e7fc145d46ec211bf3e8b0bfe1936ba35de794cb55e91f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:44:26.043797 env[1211]: time="2024-07-02T00:44:26.043780034Z" level=info msg="Container to stop \"a6fd81f232f94dc2c8fb16c09d15cfe2b333b1f2b9575df0ef0b60bf13cd72f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:44:26.043860 env[1211]: time="2024-07-02T00:44:26.043842914Z" level=info msg="Container to stop \"b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:44:26.046920 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469-shm.mount: Deactivated successfully. Jul 2 00:44:26.052017 systemd[1]: cri-containerd-4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469.scope: Deactivated successfully. Jul 2 00:44:26.070466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469-rootfs.mount: Deactivated successfully. 
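When StopPodSandbox tears down 4d1baa32..., containerd walks every container that belonged to the sandbox and logs the precondition quoted above: only containers still running (or in an unknown state) would be sent a stop signal, while containers already in CONTAINER_EXITED are simply cleaned up. A schematic of that check, paraphrased for illustration rather than taken from the containerd source:

    # Schematic paraphrase of the stop precondition containerd logs above;
    # not the actual Go implementation. Container IDs are abbreviated.
    def needs_stop_signal(state: str) -> bool:
        return state in ("CONTAINER_RUNNING", "CONTAINER_UNKNOWN")

    sandbox_containers = {
        "cc2637d57a21...": "CONTAINER_EXITED",   # init containers, from the log
        "0fffac8d88a3...": "CONTAINER_EXITED",
        "b1a83a92b395...": "CONTAINER_EXITED",
        "a6fd81f232f9...": "CONTAINER_EXITED",
        "b28426e02433...": "CONTAINER_EXITED",   # the cilium-agent container stopped just above
    }

    for cid, state in sandbox_containers.items():
        action = "send stop signal" if needs_stop_signal(state) else "skip, already exited"
        print(f"{cid}: {state} -> {action}")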
Jul 2 00:44:26.073601 env[1211]: time="2024-07-02T00:44:26.073552088Z" level=info msg="shim disconnected" id=4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469 Jul 2 00:44:26.073601 env[1211]: time="2024-07-02T00:44:26.073600249Z" level=warning msg="cleaning up after shim disconnected" id=4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469 namespace=k8s.io Jul 2 00:44:26.073739 env[1211]: time="2024-07-02T00:44:26.073609249Z" level=info msg="cleaning up dead shim" Jul 2 00:44:26.081446 env[1211]: time="2024-07-02T00:44:26.081410084Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2962 runtime=io.containerd.runc.v2\n" Jul 2 00:44:26.081731 env[1211]: time="2024-07-02T00:44:26.081706765Z" level=info msg="TearDown network for sandbox \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" successfully" Jul 2 00:44:26.081779 env[1211]: time="2024-07-02T00:44:26.081731685Z" level=info msg="StopPodSandbox for \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" returns successfully" Jul 2 00:44:26.155300 kubelet[1426]: I0702 00:44:26.155235 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b707a177-ac43-427b-8cdd-752dddf1134f-hubble-tls\") pod \"b707a177-ac43-427b-8cdd-752dddf1134f\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " Jul 2 00:44:26.155300 kubelet[1426]: I0702 00:44:26.155293 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjbsz\" (UniqueName: \"kubernetes.io/projected/b707a177-ac43-427b-8cdd-752dddf1134f-kube-api-access-tjbsz\") pod \"b707a177-ac43-427b-8cdd-752dddf1134f\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " Jul 2 00:44:26.155482 kubelet[1426]: I0702 00:44:26.155317 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-cilium-run\") pod \"b707a177-ac43-427b-8cdd-752dddf1134f\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " Jul 2 00:44:26.155482 kubelet[1426]: I0702 00:44:26.155335 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-etc-cni-netd\") pod \"b707a177-ac43-427b-8cdd-752dddf1134f\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " Jul 2 00:44:26.155482 kubelet[1426]: I0702 00:44:26.155352 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-cilium-cgroup\") pod \"b707a177-ac43-427b-8cdd-752dddf1134f\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " Jul 2 00:44:26.155482 kubelet[1426]: I0702 00:44:26.155371 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b707a177-ac43-427b-8cdd-752dddf1134f-clustermesh-secrets\") pod \"b707a177-ac43-427b-8cdd-752dddf1134f\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " Jul 2 00:44:26.155482 kubelet[1426]: I0702 00:44:26.155395 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-cni-path\") pod \"b707a177-ac43-427b-8cdd-752dddf1134f\" (UID: 
\"b707a177-ac43-427b-8cdd-752dddf1134f\") " Jul 2 00:44:26.155482 kubelet[1426]: I0702 00:44:26.155412 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-lib-modules\") pod \"b707a177-ac43-427b-8cdd-752dddf1134f\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " Jul 2 00:44:26.155630 kubelet[1426]: I0702 00:44:26.155428 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-xtables-lock\") pod \"b707a177-ac43-427b-8cdd-752dddf1134f\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " Jul 2 00:44:26.155630 kubelet[1426]: I0702 00:44:26.155470 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-host-proc-sys-net\") pod \"b707a177-ac43-427b-8cdd-752dddf1134f\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " Jul 2 00:44:26.155630 kubelet[1426]: I0702 00:44:26.155488 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-bpf-maps\") pod \"b707a177-ac43-427b-8cdd-752dddf1134f\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " Jul 2 00:44:26.155630 kubelet[1426]: I0702 00:44:26.155505 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-hostproc\") pod \"b707a177-ac43-427b-8cdd-752dddf1134f\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " Jul 2 00:44:26.155630 kubelet[1426]: I0702 00:44:26.155526 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b707a177-ac43-427b-8cdd-752dddf1134f-cilium-config-path\") pod \"b707a177-ac43-427b-8cdd-752dddf1134f\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " Jul 2 00:44:26.155630 kubelet[1426]: I0702 00:44:26.155544 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-host-proc-sys-kernel\") pod \"b707a177-ac43-427b-8cdd-752dddf1134f\" (UID: \"b707a177-ac43-427b-8cdd-752dddf1134f\") " Jul 2 00:44:26.155766 kubelet[1426]: I0702 00:44:26.155601 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b707a177-ac43-427b-8cdd-752dddf1134f" (UID: "b707a177-ac43-427b-8cdd-752dddf1134f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:26.156203 kubelet[1426]: I0702 00:44:26.155840 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-cni-path" (OuterVolumeSpecName: "cni-path") pod "b707a177-ac43-427b-8cdd-752dddf1134f" (UID: "b707a177-ac43-427b-8cdd-752dddf1134f"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:26.156203 kubelet[1426]: I0702 00:44:26.155871 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b707a177-ac43-427b-8cdd-752dddf1134f" (UID: "b707a177-ac43-427b-8cdd-752dddf1134f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:26.156203 kubelet[1426]: I0702 00:44:26.155881 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b707a177-ac43-427b-8cdd-752dddf1134f" (UID: "b707a177-ac43-427b-8cdd-752dddf1134f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:26.156203 kubelet[1426]: I0702 00:44:26.155919 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b707a177-ac43-427b-8cdd-752dddf1134f" (UID: "b707a177-ac43-427b-8cdd-752dddf1134f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:26.156203 kubelet[1426]: I0702 00:44:26.155943 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b707a177-ac43-427b-8cdd-752dddf1134f" (UID: "b707a177-ac43-427b-8cdd-752dddf1134f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:26.156382 kubelet[1426]: I0702 00:44:26.155959 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b707a177-ac43-427b-8cdd-752dddf1134f" (UID: "b707a177-ac43-427b-8cdd-752dddf1134f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:26.156382 kubelet[1426]: I0702 00:44:26.155974 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b707a177-ac43-427b-8cdd-752dddf1134f" (UID: "b707a177-ac43-427b-8cdd-752dddf1134f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:26.156382 kubelet[1426]: I0702 00:44:26.155992 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-hostproc" (OuterVolumeSpecName: "hostproc") pod "b707a177-ac43-427b-8cdd-752dddf1134f" (UID: "b707a177-ac43-427b-8cdd-752dddf1134f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:26.156382 kubelet[1426]: I0702 00:44:26.156038 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b707a177-ac43-427b-8cdd-752dddf1134f" (UID: "b707a177-ac43-427b-8cdd-752dddf1134f"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:26.158563 kubelet[1426]: I0702 00:44:26.158522 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b707a177-ac43-427b-8cdd-752dddf1134f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b707a177-ac43-427b-8cdd-752dddf1134f" (UID: "b707a177-ac43-427b-8cdd-752dddf1134f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:44:26.159766 kubelet[1426]: I0702 00:44:26.159723 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b707a177-ac43-427b-8cdd-752dddf1134f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b707a177-ac43-427b-8cdd-752dddf1134f" (UID: "b707a177-ac43-427b-8cdd-752dddf1134f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:44:26.159766 kubelet[1426]: I0702 00:44:26.159728 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b707a177-ac43-427b-8cdd-752dddf1134f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b707a177-ac43-427b-8cdd-752dddf1134f" (UID: "b707a177-ac43-427b-8cdd-752dddf1134f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:44:26.162475 kubelet[1426]: I0702 00:44:26.162433 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b707a177-ac43-427b-8cdd-752dddf1134f-kube-api-access-tjbsz" (OuterVolumeSpecName: "kube-api-access-tjbsz") pod "b707a177-ac43-427b-8cdd-752dddf1134f" (UID: "b707a177-ac43-427b-8cdd-752dddf1134f"). InnerVolumeSpecName "kube-api-access-tjbsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:44:26.255725 kubelet[1426]: I0702 00:44:26.255686 1426 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-cilium-run\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:26.255887 kubelet[1426]: I0702 00:44:26.255874 1426 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b707a177-ac43-427b-8cdd-752dddf1134f-hubble-tls\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:26.255997 kubelet[1426]: I0702 00:44:26.255981 1426 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tjbsz\" (UniqueName: \"kubernetes.io/projected/b707a177-ac43-427b-8cdd-752dddf1134f-kube-api-access-tjbsz\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:26.256078 kubelet[1426]: I0702 00:44:26.256068 1426 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b707a177-ac43-427b-8cdd-752dddf1134f-clustermesh-secrets\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:26.256184 kubelet[1426]: I0702 00:44:26.256172 1426 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-etc-cni-netd\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:26.256271 kubelet[1426]: I0702 00:44:26.256260 1426 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-cilium-cgroup\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:26.256380 kubelet[1426]: I0702 00:44:26.256369 1426 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-lib-modules\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:26.256452 kubelet[1426]: I0702 00:44:26.256443 1426 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-xtables-lock\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:26.256525 kubelet[1426]: I0702 00:44:26.256515 1426 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-host-proc-sys-net\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:26.256608 kubelet[1426]: I0702 00:44:26.256599 1426 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-bpf-maps\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:26.256684 kubelet[1426]: I0702 00:44:26.256675 1426 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-hostproc\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:26.256764 kubelet[1426]: I0702 00:44:26.256755 1426 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-cni-path\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:26.256840 kubelet[1426]: I0702 00:44:26.256832 1426 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b707a177-ac43-427b-8cdd-752dddf1134f-cilium-config-path\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:26.256921 
kubelet[1426]: I0702 00:44:26.256911 1426 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b707a177-ac43-427b-8cdd-752dddf1134f-host-proc-sys-kernel\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:26.794890 kubelet[1426]: E0702 00:44:26.792232 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:26.900520 systemd[1]: var-lib-kubelet-pods-b707a177\x2dac43\x2d427b\x2d8cdd\x2d752dddf1134f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtjbsz.mount: Deactivated successfully. Jul 2 00:44:26.900618 systemd[1]: var-lib-kubelet-pods-b707a177\x2dac43\x2d427b\x2d8cdd\x2d752dddf1134f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 00:44:26.900675 systemd[1]: var-lib-kubelet-pods-b707a177\x2dac43\x2d427b\x2d8cdd\x2d752dddf1134f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 00:44:27.052807 kubelet[1426]: I0702 00:44:27.052697 1426 scope.go:117] "RemoveContainer" containerID="b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536" Jul 2 00:44:27.055088 env[1211]: time="2024-07-02T00:44:27.055035428Z" level=info msg="RemoveContainer for \"b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536\"" Jul 2 00:44:27.056381 systemd[1]: Removed slice kubepods-burstable-podb707a177_ac43_427b_8cdd_752dddf1134f.slice. Jul 2 00:44:27.056465 systemd[1]: kubepods-burstable-podb707a177_ac43_427b_8cdd_752dddf1134f.slice: Consumed 6.790s CPU time. Jul 2 00:44:27.061189 env[1211]: time="2024-07-02T00:44:27.061083813Z" level=info msg="RemoveContainer for \"b28426e024332c2f8de0f4549f34fa32de7f1c7186d7c49f1a5fe163934a4536\" returns successfully" Jul 2 00:44:27.061449 kubelet[1426]: I0702 00:44:27.061424 1426 scope.go:117] "RemoveContainer" containerID="a6fd81f232f94dc2c8fb16c09d15cfe2b333b1f2b9575df0ef0b60bf13cd72f7" Jul 2 00:44:27.062482 env[1211]: time="2024-07-02T00:44:27.062456059Z" level=info msg="RemoveContainer for \"a6fd81f232f94dc2c8fb16c09d15cfe2b333b1f2b9575df0ef0b60bf13cd72f7\"" Jul 2 00:44:27.064843 env[1211]: time="2024-07-02T00:44:27.064809789Z" level=info msg="RemoveContainer for \"a6fd81f232f94dc2c8fb16c09d15cfe2b333b1f2b9575df0ef0b60bf13cd72f7\" returns successfully" Jul 2 00:44:27.065120 kubelet[1426]: I0702 00:44:27.065097 1426 scope.go:117] "RemoveContainer" containerID="0fffac8d88a37a01f0ba8b026f466e5176b249ef0b38ac17f800c3851fcde20f" Jul 2 00:44:27.066170 env[1211]: time="2024-07-02T00:44:27.066129675Z" level=info msg="RemoveContainer for \"0fffac8d88a37a01f0ba8b026f466e5176b249ef0b38ac17f800c3851fcde20f\"" Jul 2 00:44:27.068779 env[1211]: time="2024-07-02T00:44:27.068743486Z" level=info msg="RemoveContainer for \"0fffac8d88a37a01f0ba8b026f466e5176b249ef0b38ac17f800c3851fcde20f\" returns successfully" Jul 2 00:44:27.069068 kubelet[1426]: I0702 00:44:27.069025 1426 scope.go:117] "RemoveContainer" containerID="b1a83a92b3955ee30e7fc145d46ec211bf3e8b0bfe1936ba35de794cb55e91f2" Jul 2 00:44:27.070160 env[1211]: time="2024-07-02T00:44:27.070118171Z" level=info msg="RemoveContainer for \"b1a83a92b3955ee30e7fc145d46ec211bf3e8b0bfe1936ba35de794cb55e91f2\"" Jul 2 00:44:27.072656 env[1211]: time="2024-07-02T00:44:27.072613462Z" level=info msg="RemoveContainer for \"b1a83a92b3955ee30e7fc145d46ec211bf3e8b0bfe1936ba35de794cb55e91f2\" returns successfully" Jul 2 00:44:27.072839 kubelet[1426]: I0702 00:44:27.072817 1426 scope.go:117] 
"RemoveContainer" containerID="cc2637d57a2194ba353d8840bb89dcbaf42d8b811b5e2ff30956a8846d531dcb" Jul 2 00:44:27.073958 env[1211]: time="2024-07-02T00:44:27.073929948Z" level=info msg="RemoveContainer for \"cc2637d57a2194ba353d8840bb89dcbaf42d8b811b5e2ff30956a8846d531dcb\"" Jul 2 00:44:27.076116 env[1211]: time="2024-07-02T00:44:27.076078677Z" level=info msg="RemoveContainer for \"cc2637d57a2194ba353d8840bb89dcbaf42d8b811b5e2ff30956a8846d531dcb\" returns successfully" Jul 2 00:44:27.793341 kubelet[1426]: E0702 00:44:27.793276 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:27.940107 kubelet[1426]: I0702 00:44:27.940063 1426 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b707a177-ac43-427b-8cdd-752dddf1134f" path="/var/lib/kubelet/pods/b707a177-ac43-427b-8cdd-752dddf1134f/volumes" Jul 2 00:44:28.794468 kubelet[1426]: E0702 00:44:28.794425 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:28.890409 kubelet[1426]: E0702 00:44:28.890385 1426 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:44:28.936269 kubelet[1426]: I0702 00:44:28.936220 1426 topology_manager.go:215] "Topology Admit Handler" podUID="d2cf0539-5bb9-45f8-b38a-59cc091e0d31" podNamespace="kube-system" podName="cilium-operator-5cc964979-dx6ml" Jul 2 00:44:28.936369 kubelet[1426]: E0702 00:44:28.936297 1426 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b707a177-ac43-427b-8cdd-752dddf1134f" containerName="mount-bpf-fs" Jul 2 00:44:28.936369 kubelet[1426]: E0702 00:44:28.936310 1426 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b707a177-ac43-427b-8cdd-752dddf1134f" containerName="cilium-agent" Jul 2 00:44:28.936369 kubelet[1426]: E0702 00:44:28.936326 1426 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b707a177-ac43-427b-8cdd-752dddf1134f" containerName="mount-cgroup" Jul 2 00:44:28.936369 kubelet[1426]: E0702 00:44:28.936336 1426 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b707a177-ac43-427b-8cdd-752dddf1134f" containerName="apply-sysctl-overwrites" Jul 2 00:44:28.936369 kubelet[1426]: E0702 00:44:28.936344 1426 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b707a177-ac43-427b-8cdd-752dddf1134f" containerName="clean-cilium-state" Jul 2 00:44:28.936369 kubelet[1426]: I0702 00:44:28.936366 1426 memory_manager.go:354] "RemoveStaleState removing state" podUID="b707a177-ac43-427b-8cdd-752dddf1134f" containerName="cilium-agent" Jul 2 00:44:28.940800 systemd[1]: Created slice kubepods-besteffort-podd2cf0539_5bb9_45f8_b38a_59cc091e0d31.slice. Jul 2 00:44:28.961294 kubelet[1426]: I0702 00:44:28.961247 1426 topology_manager.go:215] "Topology Admit Handler" podUID="c86e7ed5-6eef-44a8-a8bb-edaa0726781c" podNamespace="kube-system" podName="cilium-wm9zs" Jul 2 00:44:28.965705 systemd[1]: Created slice kubepods-burstable-podc86e7ed5_6eef_44a8_a8bb_edaa0726781c.slice. 
Jul 2 00:44:28.968250 kubelet[1426]: I0702 00:44:28.968214 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-hostproc\") pod \"cilium-wm9zs\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " pod="kube-system/cilium-wm9zs" Jul 2 00:44:28.968370 kubelet[1426]: I0702 00:44:28.968264 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-lib-modules\") pod \"cilium-wm9zs\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " pod="kube-system/cilium-wm9zs" Jul 2 00:44:28.968370 kubelet[1426]: I0702 00:44:28.968297 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-cgroup\") pod \"cilium-wm9zs\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " pod="kube-system/cilium-wm9zs" Jul 2 00:44:28.968370 kubelet[1426]: I0702 00:44:28.968326 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g4jm\" (UniqueName: \"kubernetes.io/projected/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-kube-api-access-2g4jm\") pod \"cilium-wm9zs\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " pod="kube-system/cilium-wm9zs" Jul 2 00:44:28.968370 kubelet[1426]: I0702 00:44:28.968350 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-xtables-lock\") pod \"cilium-wm9zs\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " pod="kube-system/cilium-wm9zs" Jul 2 00:44:28.968480 kubelet[1426]: I0702 00:44:28.968373 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-hubble-tls\") pod \"cilium-wm9zs\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " pod="kube-system/cilium-wm9zs" Jul 2 00:44:28.968480 kubelet[1426]: I0702 00:44:28.968399 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-bpf-maps\") pod \"cilium-wm9zs\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " pod="kube-system/cilium-wm9zs" Jul 2 00:44:28.968480 kubelet[1426]: I0702 00:44:28.968423 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cni-path\") pod \"cilium-wm9zs\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " pod="kube-system/cilium-wm9zs" Jul 2 00:44:28.968480 kubelet[1426]: I0702 00:44:28.968443 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-clustermesh-secrets\") pod \"cilium-wm9zs\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " pod="kube-system/cilium-wm9zs" Jul 2 00:44:28.968480 kubelet[1426]: I0702 00:44:28.968467 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-config-path\") pod \"cilium-wm9zs\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " pod="kube-system/cilium-wm9zs" Jul 2 00:44:28.968591 kubelet[1426]: I0702 00:44:28.968494 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2cf0539-5bb9-45f8-b38a-59cc091e0d31-cilium-config-path\") pod \"cilium-operator-5cc964979-dx6ml\" (UID: \"d2cf0539-5bb9-45f8-b38a-59cc091e0d31\") " pod="kube-system/cilium-operator-5cc964979-dx6ml" Jul 2 00:44:28.968591 kubelet[1426]: I0702 00:44:28.968516 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-etc-cni-netd\") pod \"cilium-wm9zs\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " pod="kube-system/cilium-wm9zs" Jul 2 00:44:28.968591 kubelet[1426]: I0702 00:44:28.968540 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-host-proc-sys-net\") pod \"cilium-wm9zs\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " pod="kube-system/cilium-wm9zs" Jul 2 00:44:28.968591 kubelet[1426]: I0702 00:44:28.968562 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q87pz\" (UniqueName: \"kubernetes.io/projected/d2cf0539-5bb9-45f8-b38a-59cc091e0d31-kube-api-access-q87pz\") pod \"cilium-operator-5cc964979-dx6ml\" (UID: \"d2cf0539-5bb9-45f8-b38a-59cc091e0d31\") " pod="kube-system/cilium-operator-5cc964979-dx6ml" Jul 2 00:44:28.968591 kubelet[1426]: I0702 00:44:28.968585 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-ipsec-secrets\") pod \"cilium-wm9zs\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " pod="kube-system/cilium-wm9zs" Jul 2 00:44:28.968695 kubelet[1426]: I0702 00:44:28.968607 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-host-proc-sys-kernel\") pod \"cilium-wm9zs\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " pod="kube-system/cilium-wm9zs" Jul 2 00:44:28.968695 kubelet[1426]: I0702 00:44:28.968632 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-run\") pod \"cilium-wm9zs\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " pod="kube-system/cilium-wm9zs" Jul 2 00:44:29.115065 kubelet[1426]: E0702 00:44:29.114964 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:29.116620 env[1211]: time="2024-07-02T00:44:29.116269032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wm9zs,Uid:c86e7ed5-6eef-44a8-a8bb-edaa0726781c,Namespace:kube-system,Attempt:0,}" Jul 2 00:44:29.138629 env[1211]: time="2024-07-02T00:44:29.138559675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:44:29.138629 env[1211]: time="2024-07-02T00:44:29.138598275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:44:29.138629 env[1211]: time="2024-07-02T00:44:29.138608795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:44:29.138800 env[1211]: time="2024-07-02T00:44:29.138742155Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c pid=2991 runtime=io.containerd.runc.v2 Jul 2 00:44:29.148348 systemd[1]: Started cri-containerd-911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c.scope. Jul 2 00:44:29.187502 env[1211]: time="2024-07-02T00:44:29.187462696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wm9zs,Uid:c86e7ed5-6eef-44a8-a8bb-edaa0726781c,Namespace:kube-system,Attempt:0,} returns sandbox id \"911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c\"" Jul 2 00:44:29.188146 kubelet[1426]: E0702 00:44:29.188112 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:29.190442 env[1211]: time="2024-07-02T00:44:29.190404707Z" level=info msg="CreateContainer within sandbox \"911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:44:29.201770 env[1211]: time="2024-07-02T00:44:29.201715309Z" level=info msg="CreateContainer within sandbox \"911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d\"" Jul 2 00:44:29.202296 env[1211]: time="2024-07-02T00:44:29.202263231Z" level=info msg="StartContainer for \"3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d\"" Jul 2 00:44:29.215535 systemd[1]: Started cri-containerd-3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d.scope. Jul 2 00:44:29.234548 systemd[1]: cri-containerd-3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d.scope: Deactivated successfully. 
Jul 2 00:44:29.243347 kubelet[1426]: E0702 00:44:29.243315 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:29.244054 env[1211]: time="2024-07-02T00:44:29.244018827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-dx6ml,Uid:d2cf0539-5bb9-45f8-b38a-59cc091e0d31,Namespace:kube-system,Attempt:0,}" Jul 2 00:44:29.249965 env[1211]: time="2024-07-02T00:44:29.249921489Z" level=info msg="shim disconnected" id=3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d Jul 2 00:44:29.249965 env[1211]: time="2024-07-02T00:44:29.249967409Z" level=warning msg="cleaning up after shim disconnected" id=3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d namespace=k8s.io Jul 2 00:44:29.250135 env[1211]: time="2024-07-02T00:44:29.249976329Z" level=info msg="cleaning up dead shim" Jul 2 00:44:29.257107 env[1211]: time="2024-07-02T00:44:29.257051915Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3048 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T00:44:29Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 00:44:29.257418 env[1211]: time="2024-07-02T00:44:29.257325476Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Jul 2 00:44:29.260244 env[1211]: time="2024-07-02T00:44:29.260196087Z" level=error msg="Failed to pipe stdout of container \"3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d\"" error="reading from a closed fifo" Jul 2 00:44:29.260327 env[1211]: time="2024-07-02T00:44:29.260227767Z" level=error msg="Failed to pipe stderr of container \"3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d\"" error="reading from a closed fifo" Jul 2 00:44:29.260848 env[1211]: time="2024-07-02T00:44:29.260696809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:44:29.260941 env[1211]: time="2024-07-02T00:44:29.260832689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:44:29.260941 env[1211]: time="2024-07-02T00:44:29.260843169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:44:29.261012 env[1211]: time="2024-07-02T00:44:29.260981530Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c2525347975c37edc4d57fe0b8ec0c31b2bc028e4b242102fe96f21fc270749 pid=3068 runtime=io.containerd.runc.v2 Jul 2 00:44:29.262239 env[1211]: time="2024-07-02T00:44:29.262142574Z" level=error msg="StartContainer for \"3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 2 00:44:29.262558 kubelet[1426]: E0702 00:44:29.262531 1426 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d" Jul 2 00:44:29.263491 kubelet[1426]: E0702 00:44:29.263469 1426 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 00:44:29.263491 kubelet[1426]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 00:44:29.263491 kubelet[1426]: rm /hostbin/cilium-mount Jul 2 00:44:29.263581 kubelet[1426]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2g4jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-wm9zs_kube-system(c86e7ed5-6eef-44a8-a8bb-edaa0726781c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 00:44:29.263581 kubelet[1426]: E0702 
00:44:29.263525 1426 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wm9zs" podUID="c86e7ed5-6eef-44a8-a8bb-edaa0726781c" Jul 2 00:44:29.272916 systemd[1]: Started cri-containerd-7c2525347975c37edc4d57fe0b8ec0c31b2bc028e4b242102fe96f21fc270749.scope. Jul 2 00:44:29.326582 env[1211]: time="2024-07-02T00:44:29.326461973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-dx6ml,Uid:d2cf0539-5bb9-45f8-b38a-59cc091e0d31,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c2525347975c37edc4d57fe0b8ec0c31b2bc028e4b242102fe96f21fc270749\"" Jul 2 00:44:29.327282 kubelet[1426]: E0702 00:44:29.327114 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:29.328230 env[1211]: time="2024-07-02T00:44:29.328198459Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 00:44:29.794994 kubelet[1426]: E0702 00:44:29.794940 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:30.061925 env[1211]: time="2024-07-02T00:44:30.061712131Z" level=info msg="StopPodSandbox for \"911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c\"" Jul 2 00:44:30.062112 env[1211]: time="2024-07-02T00:44:30.062083812Z" level=info msg="Container to stop \"3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:44:30.068464 systemd[1]: cri-containerd-911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c.scope: Deactivated successfully. Jul 2 00:44:30.079806 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c-shm.mount: Deactivated successfully. Jul 2 00:44:30.092858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c-rootfs.mount: Deactivated successfully. 
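The StartContainer failure above is the notable event in this stretch: the cilium mount-cgroup init container carries an SELinux type in its SecurityContext (Type: spc_t, Level: s0), so the runtime tries to set the process's key-creation label during container init, and this kernel rejects the write to /proc/self/attr/keycreate with EINVAL, aborting init before the shim can even record a pid (hence the "failed to read init pid file" and closed-fifo warnings). A small diagnostic probe along those lines, offered as an illustration rather than anything runc itself runs:

    # Illustrative probe for the "write /proc/self/attr/keycreate: invalid
    # argument" failure above. It only affects the current process and is
    # not part of runc; the full context string is assumed, built from the
    # spec's type (spc_t) and level (s0) plus default user/role fields.
    import os

    def selinux_mounted() -> bool:
        # selinuxfs is mounted at /sys/fs/selinux when SELinux is enabled.
        return os.path.isdir("/sys/fs/selinux")

    def probe_keycreate(label: str = "system_u:system_r:spc_t:s0") -> str:
        try:
            with open("/proc/self/attr/keycreate", "w") as f:
                f.write(label)
            return "accepted"
        except OSError as exc:
            return f"rejected ({exc})"

    print("selinuxfs mounted :", selinux_mounted())
    print("keycreate write   :", probe_keycreate())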
Jul 2 00:44:30.111670 env[1211]: time="2024-07-02T00:44:30.111582265Z" level=info msg="shim disconnected" id=911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c Jul 2 00:44:30.111670 env[1211]: time="2024-07-02T00:44:30.111650385Z" level=warning msg="cleaning up after shim disconnected" id=911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c namespace=k8s.io Jul 2 00:44:30.111670 env[1211]: time="2024-07-02T00:44:30.111660105Z" level=info msg="cleaning up dead shim" Jul 2 00:44:30.118071 env[1211]: time="2024-07-02T00:44:30.118015127Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3121 runtime=io.containerd.runc.v2\n" Jul 2 00:44:30.118457 env[1211]: time="2024-07-02T00:44:30.118426849Z" level=info msg="TearDown network for sandbox \"911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c\" successfully" Jul 2 00:44:30.118507 env[1211]: time="2024-07-02T00:44:30.118458129Z" level=info msg="StopPodSandbox for \"911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c\" returns successfully" Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.176856 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g4jm\" (UniqueName: \"kubernetes.io/projected/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-kube-api-access-2g4jm\") pod \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.176904 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-hostproc\") pod \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.176922 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-bpf-maps\") pod \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.176941 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-hubble-tls\") pod \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.176963 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-clustermesh-secrets\") pod \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.176986 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-ipsec-secrets\") pod \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.177003 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-xtables-lock\") pod \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\" (UID: 
\"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.177033 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-run\") pod \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.177055 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-config-path\") pod \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.177073 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-host-proc-sys-net\") pod \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.177092 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cni-path\") pod \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.177116 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-host-proc-sys-kernel\") pod \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.177135 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-cgroup\") pod \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.177173 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-etc-cni-netd\") pod \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.177193 1426 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-lib-modules\") pod \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\" (UID: \"c86e7ed5-6eef-44a8-a8bb-edaa0726781c\") " Jul 2 00:44:30.179179 kubelet[1426]: I0702 00:44:30.177215 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c86e7ed5-6eef-44a8-a8bb-edaa0726781c" (UID: "c86e7ed5-6eef-44a8-a8bb-edaa0726781c"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:30.180948 kubelet[1426]: I0702 00:44:30.177308 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c86e7ed5-6eef-44a8-a8bb-edaa0726781c" (UID: "c86e7ed5-6eef-44a8-a8bb-edaa0726781c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:30.180948 kubelet[1426]: I0702 00:44:30.177335 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c86e7ed5-6eef-44a8-a8bb-edaa0726781c" (UID: "c86e7ed5-6eef-44a8-a8bb-edaa0726781c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:30.180948 kubelet[1426]: I0702 00:44:30.179195 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-hostproc" (OuterVolumeSpecName: "hostproc") pod "c86e7ed5-6eef-44a8-a8bb-edaa0726781c" (UID: "c86e7ed5-6eef-44a8-a8bb-edaa0726781c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:30.180948 kubelet[1426]: I0702 00:44:30.179199 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c86e7ed5-6eef-44a8-a8bb-edaa0726781c" (UID: "c86e7ed5-6eef-44a8-a8bb-edaa0726781c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:30.180948 kubelet[1426]: I0702 00:44:30.179226 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c86e7ed5-6eef-44a8-a8bb-edaa0726781c" (UID: "c86e7ed5-6eef-44a8-a8bb-edaa0726781c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:30.180948 kubelet[1426]: I0702 00:44:30.179243 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c86e7ed5-6eef-44a8-a8bb-edaa0726781c" (UID: "c86e7ed5-6eef-44a8-a8bb-edaa0726781c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:30.180948 kubelet[1426]: I0702 00:44:30.179257 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c86e7ed5-6eef-44a8-a8bb-edaa0726781c" (UID: "c86e7ed5-6eef-44a8-a8bb-edaa0726781c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:30.180948 kubelet[1426]: I0702 00:44:30.179264 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c86e7ed5-6eef-44a8-a8bb-edaa0726781c" (UID: "c86e7ed5-6eef-44a8-a8bb-edaa0726781c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:30.180948 kubelet[1426]: I0702 00:44:30.179278 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cni-path" (OuterVolumeSpecName: "cni-path") pod "c86e7ed5-6eef-44a8-a8bb-edaa0726781c" (UID: "c86e7ed5-6eef-44a8-a8bb-edaa0726781c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:44:30.180948 kubelet[1426]: I0702 00:44:30.179713 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c86e7ed5-6eef-44a8-a8bb-edaa0726781c" (UID: "c86e7ed5-6eef-44a8-a8bb-edaa0726781c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:44:30.180948 kubelet[1426]: I0702 00:44:30.180764 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c86e7ed5-6eef-44a8-a8bb-edaa0726781c" (UID: "c86e7ed5-6eef-44a8-a8bb-edaa0726781c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:44:30.181398 systemd[1]: var-lib-kubelet-pods-c86e7ed5\x2d6eef\x2d44a8\x2da8bb\x2dedaa0726781c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 00:44:30.181494 systemd[1]: var-lib-kubelet-pods-c86e7ed5\x2d6eef\x2d44a8\x2da8bb\x2dedaa0726781c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 00:44:30.181805 kubelet[1426]: I0702 00:44:30.181779 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-kube-api-access-2g4jm" (OuterVolumeSpecName: "kube-api-access-2g4jm") pod "c86e7ed5-6eef-44a8-a8bb-edaa0726781c" (UID: "c86e7ed5-6eef-44a8-a8bb-edaa0726781c"). InnerVolumeSpecName "kube-api-access-2g4jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:44:30.181993 kubelet[1426]: I0702 00:44:30.181968 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c86e7ed5-6eef-44a8-a8bb-edaa0726781c" (UID: "c86e7ed5-6eef-44a8-a8bb-edaa0726781c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:44:30.182791 kubelet[1426]: I0702 00:44:30.182767 1426 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c86e7ed5-6eef-44a8-a8bb-edaa0726781c" (UID: "c86e7ed5-6eef-44a8-a8bb-edaa0726781c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:44:30.183476 systemd[1]: var-lib-kubelet-pods-c86e7ed5\x2d6eef\x2d44a8\x2da8bb\x2dedaa0726781c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2g4jm.mount: Deactivated successfully. Jul 2 00:44:30.183559 systemd[1]: var-lib-kubelet-pods-c86e7ed5\x2d6eef\x2d44a8\x2da8bb\x2dedaa0726781c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Jul 2 00:44:30.279488 kubelet[1426]: I0702 00:44:30.278346 1426 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-hubble-tls\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:30.279488 kubelet[1426]: I0702 00:44:30.278517 1426 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-clustermesh-secrets\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:30.279488 kubelet[1426]: I0702 00:44:30.279308 1426 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-xtables-lock\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:30.279488 kubelet[1426]: I0702 00:44:30.279341 1426 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-ipsec-secrets\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:30.279488 kubelet[1426]: I0702 00:44:30.279355 1426 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-config-path\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:30.279488 kubelet[1426]: I0702 00:44:30.279365 1426 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-host-proc-sys-net\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:30.279488 kubelet[1426]: I0702 00:44:30.279375 1426 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-run\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:30.279488 kubelet[1426]: I0702 00:44:30.279387 1426 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cni-path\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:30.279488 kubelet[1426]: I0702 00:44:30.279397 1426 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-host-proc-sys-kernel\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:30.279488 kubelet[1426]: I0702 00:44:30.279407 1426 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-cilium-cgroup\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:30.279488 kubelet[1426]: I0702 00:44:30.279420 1426 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-etc-cni-netd\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:30.279488 kubelet[1426]: I0702 00:44:30.279430 1426 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-lib-modules\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:30.279488 kubelet[1426]: I0702 00:44:30.279442 1426 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2g4jm\" (UniqueName: \"kubernetes.io/projected/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-kube-api-access-2g4jm\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:30.279488 
kubelet[1426]: I0702 00:44:30.279450 1426 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-hostproc\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:30.279488 kubelet[1426]: I0702 00:44:30.279460 1426 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c86e7ed5-6eef-44a8-a8bb-edaa0726781c-bpf-maps\") on node \"10.0.0.42\" DevicePath \"\"" Jul 2 00:44:30.665166 env[1211]: time="2024-07-02T00:44:30.665108833Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:30.666327 env[1211]: time="2024-07-02T00:44:30.666295397Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:30.667681 env[1211]: time="2024-07-02T00:44:30.667641362Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:30.668187 env[1211]: time="2024-07-02T00:44:30.668149444Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 2 00:44:30.670324 env[1211]: time="2024-07-02T00:44:30.670295451Z" level=info msg="CreateContainer within sandbox \"7c2525347975c37edc4d57fe0b8ec0c31b2bc028e4b242102fe96f21fc270749\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 00:44:30.679750 env[1211]: time="2024-07-02T00:44:30.679691404Z" level=info msg="CreateContainer within sandbox \"7c2525347975c37edc4d57fe0b8ec0c31b2bc028e4b242102fe96f21fc270749\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9f76d84843777c75d7b95a853da54ece819181f0d478dbf5802ebcc35fe45c1f\"" Jul 2 00:44:30.680426 env[1211]: time="2024-07-02T00:44:30.680384567Z" level=info msg="StartContainer for \"9f76d84843777c75d7b95a853da54ece819181f0d478dbf5802ebcc35fe45c1f\"" Jul 2 00:44:30.694540 systemd[1]: Started cri-containerd-9f76d84843777c75d7b95a853da54ece819181f0d478dbf5802ebcc35fe45c1f.scope. 
Jul 2 00:44:30.795119 kubelet[1426]: E0702 00:44:30.795067 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:30.800365 env[1211]: time="2024-07-02T00:44:30.800313184Z" level=info msg="StartContainer for \"9f76d84843777c75d7b95a853da54ece819181f0d478dbf5802ebcc35fe45c1f\" returns successfully" Jul 2 00:44:31.064997 kubelet[1426]: E0702 00:44:31.064874 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:31.065828 kubelet[1426]: I0702 00:44:31.065788 1426 scope.go:117] "RemoveContainer" containerID="3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d" Jul 2 00:44:31.066993 env[1211]: time="2024-07-02T00:44:31.066934620Z" level=info msg="RemoveContainer for \"3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d\"" Jul 2 00:44:31.069609 systemd[1]: Removed slice kubepods-burstable-podc86e7ed5_6eef_44a8_a8bb_edaa0726781c.slice. Jul 2 00:44:31.071274 env[1211]: time="2024-07-02T00:44:31.071223954Z" level=info msg="RemoveContainer for \"3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d\" returns successfully" Jul 2 00:44:31.074549 kubelet[1426]: I0702 00:44:31.074516 1426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-dx6ml" podStartSLOduration=1.7339728970000001 podStartE2EDuration="3.074460564s" podCreationTimestamp="2024-07-02 00:44:28 +0000 UTC" firstStartedPulling="2024-07-02 00:44:29.327905498 +0000 UTC m=+56.419810068" lastFinishedPulling="2024-07-02 00:44:30.668393165 +0000 UTC m=+57.760297735" observedRunningTime="2024-07-02 00:44:31.074177403 +0000 UTC m=+58.166081973" watchObservedRunningTime="2024-07-02 00:44:31.074460564 +0000 UTC m=+58.166365134" Jul 2 00:44:31.078801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2459725875.mount: Deactivated successfully. Jul 2 00:44:31.105510 kubelet[1426]: I0702 00:44:31.105456 1426 topology_manager.go:215] "Topology Admit Handler" podUID="413f212c-04ae-4c45-a076-ac5fee4d8585" podNamespace="kube-system" podName="cilium-x24sp" Jul 2 00:44:31.105654 kubelet[1426]: E0702 00:44:31.105524 1426 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c86e7ed5-6eef-44a8-a8bb-edaa0726781c" containerName="mount-cgroup" Jul 2 00:44:31.105654 kubelet[1426]: I0702 00:44:31.105550 1426 memory_manager.go:354] "RemoveStaleState removing state" podUID="c86e7ed5-6eef-44a8-a8bb-edaa0726781c" containerName="mount-cgroup" Jul 2 00:44:31.110497 systemd[1]: Created slice kubepods-burstable-pod413f212c_04ae_4c45_a076_ac5fee4d8585.slice. 
Jul 2 00:44:31.184661 kubelet[1426]: I0702 00:44:31.184620 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g6vh\" (UniqueName: \"kubernetes.io/projected/413f212c-04ae-4c45-a076-ac5fee4d8585-kube-api-access-9g6vh\") pod \"cilium-x24sp\" (UID: \"413f212c-04ae-4c45-a076-ac5fee4d8585\") " pod="kube-system/cilium-x24sp" Jul 2 00:44:31.184661 kubelet[1426]: I0702 00:44:31.184667 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/413f212c-04ae-4c45-a076-ac5fee4d8585-cilium-ipsec-secrets\") pod \"cilium-x24sp\" (UID: \"413f212c-04ae-4c45-a076-ac5fee4d8585\") " pod="kube-system/cilium-x24sp" Jul 2 00:44:31.185028 kubelet[1426]: I0702 00:44:31.184689 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/413f212c-04ae-4c45-a076-ac5fee4d8585-cilium-run\") pod \"cilium-x24sp\" (UID: \"413f212c-04ae-4c45-a076-ac5fee4d8585\") " pod="kube-system/cilium-x24sp" Jul 2 00:44:31.185028 kubelet[1426]: I0702 00:44:31.184713 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/413f212c-04ae-4c45-a076-ac5fee4d8585-bpf-maps\") pod \"cilium-x24sp\" (UID: \"413f212c-04ae-4c45-a076-ac5fee4d8585\") " pod="kube-system/cilium-x24sp" Jul 2 00:44:31.185028 kubelet[1426]: I0702 00:44:31.184731 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/413f212c-04ae-4c45-a076-ac5fee4d8585-cilium-cgroup\") pod \"cilium-x24sp\" (UID: \"413f212c-04ae-4c45-a076-ac5fee4d8585\") " pod="kube-system/cilium-x24sp" Jul 2 00:44:31.185028 kubelet[1426]: I0702 00:44:31.184752 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/413f212c-04ae-4c45-a076-ac5fee4d8585-lib-modules\") pod \"cilium-x24sp\" (UID: \"413f212c-04ae-4c45-a076-ac5fee4d8585\") " pod="kube-system/cilium-x24sp" Jul 2 00:44:31.185028 kubelet[1426]: I0702 00:44:31.184772 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/413f212c-04ae-4c45-a076-ac5fee4d8585-host-proc-sys-kernel\") pod \"cilium-x24sp\" (UID: \"413f212c-04ae-4c45-a076-ac5fee4d8585\") " pod="kube-system/cilium-x24sp" Jul 2 00:44:31.185028 kubelet[1426]: I0702 00:44:31.184792 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/413f212c-04ae-4c45-a076-ac5fee4d8585-host-proc-sys-net\") pod \"cilium-x24sp\" (UID: \"413f212c-04ae-4c45-a076-ac5fee4d8585\") " pod="kube-system/cilium-x24sp" Jul 2 00:44:31.185028 kubelet[1426]: I0702 00:44:31.184811 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/413f212c-04ae-4c45-a076-ac5fee4d8585-hubble-tls\") pod \"cilium-x24sp\" (UID: \"413f212c-04ae-4c45-a076-ac5fee4d8585\") " pod="kube-system/cilium-x24sp" Jul 2 00:44:31.185028 kubelet[1426]: I0702 00:44:31.184831 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/413f212c-04ae-4c45-a076-ac5fee4d8585-cilium-config-path\") pod \"cilium-x24sp\" (UID: \"413f212c-04ae-4c45-a076-ac5fee4d8585\") " pod="kube-system/cilium-x24sp" Jul 2 00:44:31.185028 kubelet[1426]: I0702 00:44:31.184849 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/413f212c-04ae-4c45-a076-ac5fee4d8585-hostproc\") pod \"cilium-x24sp\" (UID: \"413f212c-04ae-4c45-a076-ac5fee4d8585\") " pod="kube-system/cilium-x24sp" Jul 2 00:44:31.185028 kubelet[1426]: I0702 00:44:31.184869 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/413f212c-04ae-4c45-a076-ac5fee4d8585-cni-path\") pod \"cilium-x24sp\" (UID: \"413f212c-04ae-4c45-a076-ac5fee4d8585\") " pod="kube-system/cilium-x24sp" Jul 2 00:44:31.185028 kubelet[1426]: I0702 00:44:31.184890 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/413f212c-04ae-4c45-a076-ac5fee4d8585-clustermesh-secrets\") pod \"cilium-x24sp\" (UID: \"413f212c-04ae-4c45-a076-ac5fee4d8585\") " pod="kube-system/cilium-x24sp" Jul 2 00:44:31.185028 kubelet[1426]: I0702 00:44:31.184910 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/413f212c-04ae-4c45-a076-ac5fee4d8585-etc-cni-netd\") pod \"cilium-x24sp\" (UID: \"413f212c-04ae-4c45-a076-ac5fee4d8585\") " pod="kube-system/cilium-x24sp" Jul 2 00:44:31.185028 kubelet[1426]: I0702 00:44:31.184932 1426 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/413f212c-04ae-4c45-a076-ac5fee4d8585-xtables-lock\") pod \"cilium-x24sp\" (UID: \"413f212c-04ae-4c45-a076-ac5fee4d8585\") " pod="kube-system/cilium-x24sp" Jul 2 00:44:31.418612 kubelet[1426]: E0702 00:44:31.418566 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:31.419111 env[1211]: time="2024-07-02T00:44:31.419069170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x24sp,Uid:413f212c-04ae-4c45-a076-ac5fee4d8585,Namespace:kube-system,Attempt:0,}" Jul 2 00:44:31.432701 env[1211]: time="2024-07-02T00:44:31.432630654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:44:31.432819 env[1211]: time="2024-07-02T00:44:31.432714854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:44:31.432819 env[1211]: time="2024-07-02T00:44:31.432741374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:44:31.432930 env[1211]: time="2024-07-02T00:44:31.432901815Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2019ba6b1dac65afab6df73b72d7cc0f8e0cf4a3d83a231418c0b59294edce21 pid=3188 runtime=io.containerd.runc.v2 Jul 2 00:44:31.442504 systemd[1]: Started cri-containerd-2019ba6b1dac65afab6df73b72d7cc0f8e0cf4a3d83a231418c0b59294edce21.scope. 
Jul 2 00:44:31.485385 env[1211]: time="2024-07-02T00:44:31.485333186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x24sp,Uid:413f212c-04ae-4c45-a076-ac5fee4d8585,Namespace:kube-system,Attempt:0,} returns sandbox id \"2019ba6b1dac65afab6df73b72d7cc0f8e0cf4a3d83a231418c0b59294edce21\"" Jul 2 00:44:31.486159 kubelet[1426]: E0702 00:44:31.486130 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:31.488416 env[1211]: time="2024-07-02T00:44:31.488377476Z" level=info msg="CreateContainer within sandbox \"2019ba6b1dac65afab6df73b72d7cc0f8e0cf4a3d83a231418c0b59294edce21\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:44:31.503608 env[1211]: time="2024-07-02T00:44:31.503550766Z" level=info msg="CreateContainer within sandbox \"2019ba6b1dac65afab6df73b72d7cc0f8e0cf4a3d83a231418c0b59294edce21\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b6432d67fdacb3886dbf58da848b6b0d279b6bb95f4fa6a941b05c49b06acf74\"" Jul 2 00:44:31.504314 env[1211]: time="2024-07-02T00:44:31.504274008Z" level=info msg="StartContainer for \"b6432d67fdacb3886dbf58da848b6b0d279b6bb95f4fa6a941b05c49b06acf74\"" Jul 2 00:44:31.518656 systemd[1]: Started cri-containerd-b6432d67fdacb3886dbf58da848b6b0d279b6bb95f4fa6a941b05c49b06acf74.scope. Jul 2 00:44:31.564376 env[1211]: time="2024-07-02T00:44:31.564326804Z" level=info msg="StartContainer for \"b6432d67fdacb3886dbf58da848b6b0d279b6bb95f4fa6a941b05c49b06acf74\" returns successfully" Jul 2 00:44:31.574853 systemd[1]: cri-containerd-b6432d67fdacb3886dbf58da848b6b0d279b6bb95f4fa6a941b05c49b06acf74.scope: Deactivated successfully. Jul 2 00:44:31.600312 env[1211]: time="2024-07-02T00:44:31.600258521Z" level=info msg="shim disconnected" id=b6432d67fdacb3886dbf58da848b6b0d279b6bb95f4fa6a941b05c49b06acf74 Jul 2 00:44:31.600312 env[1211]: time="2024-07-02T00:44:31.600311962Z" level=warning msg="cleaning up after shim disconnected" id=b6432d67fdacb3886dbf58da848b6b0d279b6bb95f4fa6a941b05c49b06acf74 namespace=k8s.io Jul 2 00:44:31.600571 env[1211]: time="2024-07-02T00:44:31.600326042Z" level=info msg="cleaning up dead shim" Jul 2 00:44:31.607325 env[1211]: time="2024-07-02T00:44:31.607281664Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3269 runtime=io.containerd.runc.v2\n" Jul 2 00:44:31.796032 kubelet[1426]: E0702 00:44:31.795929 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:31.940533 kubelet[1426]: I0702 00:44:31.940247 1426 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c86e7ed5-6eef-44a8-a8bb-edaa0726781c" path="/var/lib/kubelet/pods/c86e7ed5-6eef-44a8-a8bb-edaa0726781c/volumes" Jul 2 00:44:32.069189 kubelet[1426]: E0702 00:44:32.068952 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:32.070667 env[1211]: time="2024-07-02T00:44:32.070559408Z" level=info msg="CreateContainer within sandbox \"2019ba6b1dac65afab6df73b72d7cc0f8e0cf4a3d83a231418c0b59294edce21\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:44:32.071041 kubelet[1426]: E0702 00:44:32.071022 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:32.084748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount391569267.mount: Deactivated successfully. Jul 2 00:44:32.090209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount571605845.mount: Deactivated successfully. Jul 2 00:44:32.094202 env[1211]: time="2024-07-02T00:44:32.094164721Z" level=info msg="CreateContainer within sandbox \"2019ba6b1dac65afab6df73b72d7cc0f8e0cf4a3d83a231418c0b59294edce21\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"535fe4683c582c991ce33a38770f1d30fec979c3b241e410ac72c679aabe2e7a\"" Jul 2 00:44:32.094776 env[1211]: time="2024-07-02T00:44:32.094751362Z" level=info msg="StartContainer for \"535fe4683c582c991ce33a38770f1d30fec979c3b241e410ac72c679aabe2e7a\"" Jul 2 00:44:32.107658 systemd[1]: Started cri-containerd-535fe4683c582c991ce33a38770f1d30fec979c3b241e410ac72c679aabe2e7a.scope. Jul 2 00:44:32.144366 env[1211]: time="2024-07-02T00:44:32.144307234Z" level=info msg="StartContainer for \"535fe4683c582c991ce33a38770f1d30fec979c3b241e410ac72c679aabe2e7a\" returns successfully" Jul 2 00:44:32.149698 systemd[1]: cri-containerd-535fe4683c582c991ce33a38770f1d30fec979c3b241e410ac72c679aabe2e7a.scope: Deactivated successfully. Jul 2 00:44:32.173510 env[1211]: time="2024-07-02T00:44:32.173411363Z" level=info msg="shim disconnected" id=535fe4683c582c991ce33a38770f1d30fec979c3b241e410ac72c679aabe2e7a Jul 2 00:44:32.173510 env[1211]: time="2024-07-02T00:44:32.173499123Z" level=warning msg="cleaning up after shim disconnected" id=535fe4683c582c991ce33a38770f1d30fec979c3b241e410ac72c679aabe2e7a namespace=k8s.io Jul 2 00:44:32.173510 env[1211]: time="2024-07-02T00:44:32.173509803Z" level=info msg="cleaning up dead shim" Jul 2 00:44:32.180640 env[1211]: time="2024-07-02T00:44:32.180582705Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3331 runtime=io.containerd.runc.v2\n" Jul 2 00:44:32.355905 kubelet[1426]: W0702 00:44:32.355710 1426 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86e7ed5_6eef_44a8_a8bb_edaa0726781c.slice/cri-containerd-3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d.scope WatchSource:0}: container "3c8d53f0a3b44c8662c2b38593db004d4eacb9b0a7ed43a4ae8423cc4714230d" in namespace "k8s.io": not found Jul 2 00:44:32.796739 kubelet[1426]: E0702 00:44:32.796706 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:33.073698 kubelet[1426]: E0702 00:44:33.073613 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:33.075719 env[1211]: time="2024-07-02T00:44:33.075681272Z" level=info msg="CreateContainer within sandbox \"2019ba6b1dac65afab6df73b72d7cc0f8e0cf4a3d83a231418c0b59294edce21\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:44:33.099411 env[1211]: time="2024-07-02T00:44:33.099347260Z" level=info msg="CreateContainer within sandbox \"2019ba6b1dac65afab6df73b72d7cc0f8e0cf4a3d83a231418c0b59294edce21\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5c1bd200872c2fcd218b67ed26e0dd5f1ced150f399491df0be53424b8f1bd2f\"" Jul 2 00:44:33.100028 
env[1211]: time="2024-07-02T00:44:33.100002261Z" level=info msg="StartContainer for \"5c1bd200872c2fcd218b67ed26e0dd5f1ced150f399491df0be53424b8f1bd2f\"" Jul 2 00:44:33.118353 systemd[1]: Started cri-containerd-5c1bd200872c2fcd218b67ed26e0dd5f1ced150f399491df0be53424b8f1bd2f.scope. Jul 2 00:44:33.154458 env[1211]: time="2024-07-02T00:44:33.154398218Z" level=info msg="StartContainer for \"5c1bd200872c2fcd218b67ed26e0dd5f1ced150f399491df0be53424b8f1bd2f\" returns successfully" Jul 2 00:44:33.157744 systemd[1]: cri-containerd-5c1bd200872c2fcd218b67ed26e0dd5f1ced150f399491df0be53424b8f1bd2f.scope: Deactivated successfully. Jul 2 00:44:33.183361 env[1211]: time="2024-07-02T00:44:33.183311421Z" level=info msg="shim disconnected" id=5c1bd200872c2fcd218b67ed26e0dd5f1ced150f399491df0be53424b8f1bd2f Jul 2 00:44:33.183592 env[1211]: time="2024-07-02T00:44:33.183572941Z" level=warning msg="cleaning up after shim disconnected" id=5c1bd200872c2fcd218b67ed26e0dd5f1ced150f399491df0be53424b8f1bd2f namespace=k8s.io Jul 2 00:44:33.183649 env[1211]: time="2024-07-02T00:44:33.183637342Z" level=info msg="cleaning up dead shim" Jul 2 00:44:33.190506 env[1211]: time="2024-07-02T00:44:33.190467361Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3391 runtime=io.containerd.runc.v2\n" Jul 2 00:44:33.756588 kubelet[1426]: E0702 00:44:33.756547 1426 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:33.775854 env[1211]: time="2024-07-02T00:44:33.775805841Z" level=info msg="StopPodSandbox for \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\"" Jul 2 00:44:33.775991 env[1211]: time="2024-07-02T00:44:33.775908802Z" level=info msg="TearDown network for sandbox \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" successfully" Jul 2 00:44:33.775991 env[1211]: time="2024-07-02T00:44:33.775951162Z" level=info msg="StopPodSandbox for \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" returns successfully" Jul 2 00:44:33.776341 env[1211]: time="2024-07-02T00:44:33.776311723Z" level=info msg="RemovePodSandbox for \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\"" Jul 2 00:44:33.776453 env[1211]: time="2024-07-02T00:44:33.776417723Z" level=info msg="Forcibly stopping sandbox \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\"" Jul 2 00:44:33.776561 env[1211]: time="2024-07-02T00:44:33.776542964Z" level=info msg="TearDown network for sandbox \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" successfully" Jul 2 00:44:33.797471 kubelet[1426]: E0702 00:44:33.797422 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:33.801683 env[1211]: time="2024-07-02T00:44:33.801638476Z" level=info msg="RemovePodSandbox \"4d1baa32c24f4e5ac4beb043b553c828f8eebb377ef1fbe1b5fc6e81134b2469\" returns successfully" Jul 2 00:44:33.802356 env[1211]: time="2024-07-02T00:44:33.802325278Z" level=info msg="StopPodSandbox for \"911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c\"" Jul 2 00:44:33.802548 env[1211]: time="2024-07-02T00:44:33.802506518Z" level=info msg="TearDown network for sandbox \"911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c\" successfully" Jul 2 00:44:33.802613 env[1211]: time="2024-07-02T00:44:33.802598358Z" level=info msg="StopPodSandbox for 
\"911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c\" returns successfully" Jul 2 00:44:33.802989 env[1211]: time="2024-07-02T00:44:33.802960199Z" level=info msg="RemovePodSandbox for \"911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c\"" Jul 2 00:44:33.803056 env[1211]: time="2024-07-02T00:44:33.803027080Z" level=info msg="Forcibly stopping sandbox \"911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c\"" Jul 2 00:44:33.803125 env[1211]: time="2024-07-02T00:44:33.803107600Z" level=info msg="TearDown network for sandbox \"911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c\" successfully" Jul 2 00:44:33.805600 env[1211]: time="2024-07-02T00:44:33.805574087Z" level=info msg="RemovePodSandbox \"911e35b7c19739b4ae45790aab7ea0a3a7388ebcec6983b5c5458bc1c176319c\" returns successfully" Jul 2 00:44:33.891165 kubelet[1426]: E0702 00:44:33.891135 1426 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:44:34.077660 kubelet[1426]: E0702 00:44:34.077569 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:34.078341 systemd[1]: run-containerd-runc-k8s.io-5c1bd200872c2fcd218b67ed26e0dd5f1ced150f399491df0be53424b8f1bd2f-runc.YA9zXI.mount: Deactivated successfully. Jul 2 00:44:34.078433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c1bd200872c2fcd218b67ed26e0dd5f1ced150f399491df0be53424b8f1bd2f-rootfs.mount: Deactivated successfully. Jul 2 00:44:34.080914 env[1211]: time="2024-07-02T00:44:34.080621262Z" level=info msg="CreateContainer within sandbox \"2019ba6b1dac65afab6df73b72d7cc0f8e0cf4a3d83a231418c0b59294edce21\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:44:34.096148 env[1211]: time="2024-07-02T00:44:34.096087344Z" level=info msg="CreateContainer within sandbox \"2019ba6b1dac65afab6df73b72d7cc0f8e0cf4a3d83a231418c0b59294edce21\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1189d9321907da7abf8c68d13c1633fbcd1d58da57c338c5ddb78040491ba7d0\"" Jul 2 00:44:34.096691 env[1211]: time="2024-07-02T00:44:34.096661425Z" level=info msg="StartContainer for \"1189d9321907da7abf8c68d13c1633fbcd1d58da57c338c5ddb78040491ba7d0\"" Jul 2 00:44:34.115358 systemd[1]: Started cri-containerd-1189d9321907da7abf8c68d13c1633fbcd1d58da57c338c5ddb78040491ba7d0.scope. Jul 2 00:44:34.144973 systemd[1]: cri-containerd-1189d9321907da7abf8c68d13c1633fbcd1d58da57c338c5ddb78040491ba7d0.scope: Deactivated successfully. 
Jul 2 00:44:34.146044 env[1211]: time="2024-07-02T00:44:34.146003238Z" level=info msg="StartContainer for \"1189d9321907da7abf8c68d13c1633fbcd1d58da57c338c5ddb78040491ba7d0\" returns successfully" Jul 2 00:44:34.164757 env[1211]: time="2024-07-02T00:44:34.164698768Z" level=info msg="shim disconnected" id=1189d9321907da7abf8c68d13c1633fbcd1d58da57c338c5ddb78040491ba7d0 Jul 2 00:44:34.164757 env[1211]: time="2024-07-02T00:44:34.164746249Z" level=warning msg="cleaning up after shim disconnected" id=1189d9321907da7abf8c68d13c1633fbcd1d58da57c338c5ddb78040491ba7d0 namespace=k8s.io Jul 2 00:44:34.164757 env[1211]: time="2024-07-02T00:44:34.164755969Z" level=info msg="cleaning up dead shim" Jul 2 00:44:34.172125 env[1211]: time="2024-07-02T00:44:34.172075108Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3449 runtime=io.containerd.runc.v2\n" Jul 2 00:44:34.798387 kubelet[1426]: E0702 00:44:34.798312 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:35.078616 systemd[1]: run-containerd-runc-k8s.io-1189d9321907da7abf8c68d13c1633fbcd1d58da57c338c5ddb78040491ba7d0-runc.f8cvUg.mount: Deactivated successfully. Jul 2 00:44:35.078712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1189d9321907da7abf8c68d13c1633fbcd1d58da57c338c5ddb78040491ba7d0-rootfs.mount: Deactivated successfully. Jul 2 00:44:35.081910 kubelet[1426]: E0702 00:44:35.081867 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:35.084870 env[1211]: time="2024-07-02T00:44:35.084824191Z" level=info msg="CreateContainer within sandbox \"2019ba6b1dac65afab6df73b72d7cc0f8e0cf4a3d83a231418c0b59294edce21\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:44:35.105126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2245310405.mount: Deactivated successfully. Jul 2 00:44:35.108976 env[1211]: time="2024-07-02T00:44:35.108928491Z" level=info msg="CreateContainer within sandbox \"2019ba6b1dac65afab6df73b72d7cc0f8e0cf4a3d83a231418c0b59294edce21\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aa283f0ac148bb4b22e4de597533ab904373a2f014103e2ba94f41fb01e7130d\"" Jul 2 00:44:35.109700 env[1211]: time="2024-07-02T00:44:35.109643213Z" level=info msg="StartContainer for \"aa283f0ac148bb4b22e4de597533ab904373a2f014103e2ba94f41fb01e7130d\"" Jul 2 00:44:35.129641 systemd[1]: Started cri-containerd-aa283f0ac148bb4b22e4de597533ab904373a2f014103e2ba94f41fb01e7130d.scope. 
Jul 2 00:44:35.131542 kubelet[1426]: I0702 00:44:35.129586 1426 setters.go:568] "Node became not ready" node="10.0.0.42" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T00:44:35Z","lastTransitionTime":"2024-07-02T00:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 00:44:35.167140 env[1211]: time="2024-07-02T00:44:35.167090838Z" level=info msg="StartContainer for \"aa283f0ac148bb4b22e4de597533ab904373a2f014103e2ba94f41fb01e7130d\" returns successfully" Jul 2 00:44:35.431182 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 2 00:44:35.468053 kubelet[1426]: W0702 00:44:35.467972 1426 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod413f212c_04ae_4c45_a076_ac5fee4d8585.slice/cri-containerd-b6432d67fdacb3886dbf58da848b6b0d279b6bb95f4fa6a941b05c49b06acf74.scope WatchSource:0}: task b6432d67fdacb3886dbf58da848b6b0d279b6bb95f4fa6a941b05c49b06acf74 not found: not found Jul 2 00:44:35.799520 kubelet[1426]: E0702 00:44:35.799405 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:36.086456 kubelet[1426]: E0702 00:44:36.086367 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:36.100719 kubelet[1426]: I0702 00:44:36.100680 1426 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-x24sp" podStartSLOduration=5.100638019 podStartE2EDuration="5.100638019s" podCreationTimestamp="2024-07-02 00:44:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:44:36.100389818 +0000 UTC m=+63.192294388" watchObservedRunningTime="2024-07-02 00:44:36.100638019 +0000 UTC m=+63.192542589" Jul 2 00:44:36.799923 kubelet[1426]: E0702 00:44:36.799861 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:37.420344 kubelet[1426]: E0702 00:44:37.420283 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:37.800347 kubelet[1426]: E0702 00:44:37.800217 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:38.092398 systemd-networkd[1037]: lxc_health: Link UP Jul 2 00:44:38.099662 systemd-networkd[1037]: lxc_health: Gained carrier Jul 2 00:44:38.103765 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 00:44:38.582192 kubelet[1426]: W0702 00:44:38.581895 1426 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod413f212c_04ae_4c45_a076_ac5fee4d8585.slice/cri-containerd-535fe4683c582c991ce33a38770f1d30fec979c3b241e410ac72c679aabe2e7a.scope WatchSource:0}: task 535fe4683c582c991ce33a38770f1d30fec979c3b241e410ac72c679aabe2e7a not found: not found Jul 2 00:44:38.800613 kubelet[1426]: E0702 00:44:38.800558 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jul 2 00:44:39.353624 systemd-networkd[1037]: lxc_health: Gained IPv6LL Jul 2 00:44:39.420702 kubelet[1426]: E0702 00:44:39.420655 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:39.801486 kubelet[1426]: E0702 00:44:39.801441 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:40.092446 kubelet[1426]: E0702 00:44:40.092343 1426 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:44:40.802212 kubelet[1426]: E0702 00:44:40.802146 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:41.688756 kubelet[1426]: W0702 00:44:41.688718 1426 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod413f212c_04ae_4c45_a076_ac5fee4d8585.slice/cri-containerd-5c1bd200872c2fcd218b67ed26e0dd5f1ced150f399491df0be53424b8f1bd2f.scope WatchSource:0}: task 5c1bd200872c2fcd218b67ed26e0dd5f1ced150f399491df0be53424b8f1bd2f not found: not found Jul 2 00:44:41.802646 kubelet[1426]: E0702 00:44:41.802615 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:42.803597 kubelet[1426]: E0702 00:44:42.803561 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:43.804870 kubelet[1426]: E0702 00:44:43.804821 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:43.952930 systemd[1]: run-containerd-runc-k8s.io-aa283f0ac148bb4b22e4de597533ab904373a2f014103e2ba94f41fb01e7130d-runc.HsIXVK.mount: Deactivated successfully. Jul 2 00:44:44.799101 kubelet[1426]: W0702 00:44:44.799064 1426 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod413f212c_04ae_4c45_a076_ac5fee4d8585.slice/cri-containerd-1189d9321907da7abf8c68d13c1633fbcd1d58da57c338c5ddb78040491ba7d0.scope WatchSource:0}: task 1189d9321907da7abf8c68d13c1633fbcd1d58da57c338c5ddb78040491ba7d0 not found: not found Jul 2 00:44:44.805490 kubelet[1426]: E0702 00:44:44.805469 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:44:45.806210 kubelet[1426]: E0702 00:44:45.806146 1426 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"