Jul 2 00:54:34.722821 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 00:54:34.722840 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Jul 1 23:37:37 -00 2024
Jul 2 00:54:34.722847 kernel: efi: EFI v2.70 by EDK II
Jul 2 00:54:34.722853 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Jul 2 00:54:34.722858 kernel: random: crng init done
Jul 2 00:54:34.722864 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:54:34.722870 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Jul 2 00:54:34.722876 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 2 00:54:34.722891 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:54:34.722897 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:54:34.722902 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:54:34.722908 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:54:34.722913 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:54:34.722918 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:54:34.722949 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:54:34.722955 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:54:34.722960 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:54:34.722966 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 2 00:54:34.722972 kernel: NUMA: Failed to initialise from firmware
Jul 2 00:54:34.722978 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 00:54:34.722983 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff]
Jul 2 00:54:34.722989 kernel: Zone ranges:
Jul 2 00:54:34.722994 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 00:54:34.723001 kernel: DMA32 empty
Jul 2 00:54:34.723007 kernel: Normal empty
Jul 2 00:54:34.723012 kernel: Movable zone start for each node
Jul 2 00:54:34.723018 kernel: Early memory node ranges
Jul 2 00:54:34.723023 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Jul 2 00:54:34.723029 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Jul 2 00:54:34.723034 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Jul 2 00:54:34.723040 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Jul 2 00:54:34.723045 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Jul 2 00:54:34.723051 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Jul 2 00:54:34.723056 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Jul 2 00:54:34.723062 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 00:54:34.723069 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 2 00:54:34.723074 kernel: psci: probing for conduit method from ACPI.
Jul 2 00:54:34.723080 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 00:54:34.723085 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 00:54:34.723091 kernel: psci: Trusted OS migration not required
Jul 2 00:54:34.723099 kernel: psci: SMC Calling Convention v1.1
Jul 2 00:54:34.723105 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 2 00:54:34.723112 kernel: ACPI: SRAT not present
Jul 2 00:54:34.723119 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Jul 2 00:54:34.723125 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Jul 2 00:54:34.723131 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 2 00:54:34.723137 kernel: Detected PIPT I-cache on CPU0
Jul 2 00:54:34.723143 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 00:54:34.723149 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 00:54:34.723154 kernel: CPU features: detected: Spectre-v4
Jul 2 00:54:34.723160 kernel: CPU features: detected: Spectre-BHB
Jul 2 00:54:34.723167 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 00:54:34.723173 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 00:54:34.723180 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 00:54:34.723186 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 2 00:54:34.723192 kernel: Policy zone: DMA
Jul 2 00:54:34.723199 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 00:54:34.723205 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:54:34.723211 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:54:34.723217 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:54:34.723223 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:54:34.723229 kernel: Memory: 2457460K/2572288K available (9792K kernel code, 2092K rwdata, 7572K rodata, 36352K init, 777K bss, 114828K reserved, 0K cma-reserved)
Jul 2 00:54:34.723236 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 00:54:34.723242 kernel: trace event string verifier disabled
Jul 2 00:54:34.723248 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:54:34.723255 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:54:34.723261 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 00:54:34.723267 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:54:34.723273 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:54:34.723279 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:54:34.723285 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 00:54:34.723291 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 00:54:34.723297 kernel: GICv3: 256 SPIs implemented
Jul 2 00:54:34.723304 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 00:54:34.723316 kernel: GICv3: Distributor has no Range Selector support
Jul 2 00:54:34.723330 kernel: Root IRQ handler: gic_handle_irq
Jul 2 00:54:34.723336 kernel: GICv3: 16 PPIs implemented
Jul 2 00:54:34.723342 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 2 00:54:34.723348 kernel: ACPI: SRAT not present
Jul 2 00:54:34.723353 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 2 00:54:34.723360 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 00:54:34.723366 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 00:54:34.723372 kernel: GICv3: using LPI property table @0x00000000400d0000
Jul 2 00:54:34.723378 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Jul 2 00:54:34.723383 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:54:34.723391 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 00:54:34.723397 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 00:54:34.723403 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 00:54:34.723409 kernel: arm-pv: using stolen time PV
Jul 2 00:54:34.723415 kernel: Console: colour dummy device 80x25
Jul 2 00:54:34.723421 kernel: ACPI: Core revision 20210730
Jul 2 00:54:34.723428 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 00:54:34.723434 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:54:34.723440 kernel: LSM: Security Framework initializing
Jul 2 00:54:34.723446 kernel: SELinux: Initializing.
Jul 2 00:54:34.723453 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:54:34.723460 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:54:34.723466 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:54:34.723472 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 2 00:54:34.723478 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 2 00:54:34.723484 kernel: Remapping and enabling EFI services.
Jul 2 00:54:34.723490 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:54:34.723496 kernel: Detected PIPT I-cache on CPU1
Jul 2 00:54:34.723502 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 2 00:54:34.723510 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Jul 2 00:54:34.723516 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:54:34.723522 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 00:54:34.723528 kernel: Detected PIPT I-cache on CPU2
Jul 2 00:54:34.723567 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 2 00:54:34.723573 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Jul 2 00:54:34.723579 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:54:34.723585 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 2 00:54:34.723592 kernel: Detected PIPT I-cache on CPU3
Jul 2 00:54:34.723598 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 2 00:54:34.723605 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Jul 2 00:54:34.723611 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:54:34.723617 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 2 00:54:34.723624 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 00:54:34.723634 kernel: SMP: Total of 4 processors activated.
Jul 2 00:54:34.723641 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 00:54:34.723648 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 00:54:34.723654 kernel: CPU features: detected: Common not Private translations
Jul 2 00:54:34.723660 kernel: CPU features: detected: CRC32 instructions
Jul 2 00:54:34.723667 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 00:54:34.723673 kernel: CPU features: detected: LSE atomic instructions
Jul 2 00:54:34.723679 kernel: CPU features: detected: Privileged Access Never
Jul 2 00:54:34.723687 kernel: CPU features: detected: RAS Extension Support
Jul 2 00:54:34.723694 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 2 00:54:34.723700 kernel: CPU: All CPU(s) started at EL1
Jul 2 00:54:34.723707 kernel: alternatives: patching kernel code
Jul 2 00:54:34.723714 kernel: devtmpfs: initialized
Jul 2 00:54:34.723721 kernel: KASLR enabled
Jul 2 00:54:34.723727 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:54:34.723734 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 00:54:34.723740 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:54:34.723747 kernel: SMBIOS 3.0.0 present.
Jul 2 00:54:34.723753 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jul 2 00:54:34.723760 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:54:34.723766 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 00:54:34.723773 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 00:54:34.723781 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 00:54:34.723787 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:54:34.723793 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
Jul 2 00:54:34.723800 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:54:34.723806 kernel: cpuidle: using governor menu
Jul 2 00:54:34.723813 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 00:54:34.723819 kernel: ASID allocator initialised with 32768 entries
Jul 2 00:54:34.723826 kernel: ACPI: bus type PCI registered
Jul 2 00:54:34.723832 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:54:34.723840 kernel: Serial: AMBA PL011 UART driver
Jul 2 00:54:34.723847 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:54:34.723853 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 00:54:34.723860 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:54:34.723866 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 00:54:34.723872 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:54:34.723879 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 00:54:34.723892 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:54:34.723899 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:54:34.723907 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:54:34.723913 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:54:34.723919 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 00:54:34.723926 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 00:54:34.723932 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 00:54:34.723939 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:54:34.723945 kernel: ACPI: Interpreter enabled
Jul 2 00:54:34.723951 kernel: ACPI: Using GIC for interrupt routing
Jul 2 00:54:34.723958 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 00:54:34.723966 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 00:54:34.723972 kernel: printk: console [ttyAMA0] enabled
Jul 2 00:54:34.723978 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:54:34.724114 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:54:34.724178 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 00:54:34.724235 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 00:54:34.724292 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 2 00:54:34.724370 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 2 00:54:34.724380 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 2 00:54:34.724386 kernel: PCI host bridge to bus 0000:00
Jul 2 00:54:34.724450 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 2 00:54:34.724505 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 00:54:34.724556 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 2 00:54:34.724608 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:54:34.724681 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 2 00:54:34.724750 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 00:54:34.724811 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 2 00:54:34.724871 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 2 00:54:34.724937 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 00:54:34.724998 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 00:54:34.725057 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 2 00:54:34.725118 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 2 00:54:34.725171 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 2 00:54:34.725223 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 00:54:34.725275 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 2 00:54:34.725284 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 00:54:34.725291 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 00:54:34.725298 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 00:54:34.725306 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 00:54:34.725339 kernel: iommu: Default domain type: Translated
Jul 2 00:54:34.725346 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 00:54:34.725353 kernel: vgaarb: loaded
Jul 2 00:54:34.725359 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 00:54:34.725366 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 00:54:34.725372 kernel: PTP clock support registered
Jul 2 00:54:34.725379 kernel: Registered efivars operations
Jul 2 00:54:34.725385 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 00:54:34.725392 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:54:34.725400 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:54:34.725406 kernel: pnp: PnP ACPI init
Jul 2 00:54:34.725480 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 2 00:54:34.725490 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 00:54:34.725497 kernel: NET: Registered PF_INET protocol family
Jul 2 00:54:34.725503 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:54:34.725510 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:54:34.725517 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:54:34.725525 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:54:34.725532 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 2 00:54:34.725538 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:54:34.725545 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:54:34.725551 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:54:34.725558 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:54:34.725565 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:54:34.725571 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 2 00:54:34.725579 kernel: kvm [1]: HYP mode not available
Jul 2 00:54:34.725585 kernel: Initialise system trusted keyrings
Jul 2 00:54:34.725592 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:54:34.725598 kernel: Key type asymmetric registered
Jul 2 00:54:34.725605 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:54:34.725611 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 00:54:34.725618 kernel: io scheduler mq-deadline registered
Jul 2 00:54:34.725624 kernel: io scheduler kyber registered
Jul 2 00:54:34.725631 kernel: io scheduler bfq registered
Jul 2 00:54:34.725637 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 00:54:34.725645 kernel: ACPI: button: Power Button [PWRB]
Jul 2 00:54:34.725652 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 00:54:34.725714 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 2 00:54:34.725723 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:54:34.725730 kernel: thunder_xcv, ver 1.0
Jul 2 00:54:34.725736 kernel: thunder_bgx, ver 1.0
Jul 2 00:54:34.725743 kernel: nicpf, ver 1.0
Jul 2 00:54:34.725749 kernel: nicvf, ver 1.0
Jul 2 00:54:34.725814 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 00:54:34.725873 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T00:54:34 UTC (1719881674)
Jul 2 00:54:34.725888 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 00:54:34.725895 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:54:34.725901 kernel: Segment Routing with IPv6
Jul 2 00:54:34.725908 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:54:34.725914 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:54:34.725921 kernel: Key type dns_resolver registered
Jul 2 00:54:34.725927 kernel: registered taskstats version 1
Jul 2 00:54:34.725935 kernel: Loading compiled-in X.509 certificates
Jul 2 00:54:34.725942 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: c418313b450e4055b23e41c11cb6dc415de0265d'
Jul 2 00:54:34.725948 kernel: Key type .fscrypt registered
Jul 2 00:54:34.725955 kernel: Key type fscrypt-provisioning registered
Jul 2 00:54:34.725961 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:54:34.725968 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:54:34.725974 kernel: ima: No architecture policies found
Jul 2 00:54:34.725981 kernel: clk: Disabling unused clocks
Jul 2 00:54:34.725987 kernel: Freeing unused kernel memory: 36352K
Jul 2 00:54:34.725995 kernel: Run /init as init process
Jul 2 00:54:34.726002 kernel: with arguments:
Jul 2 00:54:34.726008 kernel: /init
Jul 2 00:54:34.726014 kernel: with environment:
Jul 2 00:54:34.726021 kernel: HOME=/
Jul 2 00:54:34.726027 kernel: TERM=linux
Jul 2 00:54:34.726033 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:54:34.726042 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 00:54:34.726052 systemd[1]: Detected virtualization kvm.
Jul 2 00:54:34.726060 systemd[1]: Detected architecture arm64.
Jul 2 00:54:34.726066 systemd[1]: Running in initrd.
Jul 2 00:54:34.726073 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:54:34.726080 systemd[1]: Hostname set to <localhost>.
Jul 2 00:54:34.726087 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:54:34.726094 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:54:34.726101 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 00:54:34.726109 systemd[1]: Reached target cryptsetup.target.
Jul 2 00:54:34.726116 systemd[1]: Reached target paths.target.
Jul 2 00:54:34.726122 systemd[1]: Reached target slices.target.
Jul 2 00:54:34.726129 systemd[1]: Reached target swap.target.
Jul 2 00:54:34.726136 systemd[1]: Reached target timers.target.
Jul 2 00:54:34.726143 systemd[1]: Listening on iscsid.socket.
Jul 2 00:54:34.726150 systemd[1]: Listening on iscsiuio.socket.
Jul 2 00:54:34.726158 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 00:54:34.726165 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 00:54:34.726172 systemd[1]: Listening on systemd-journald.socket.
Jul 2 00:54:34.726179 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 00:54:34.726186 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 00:54:34.726193 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 00:54:34.726199 systemd[1]: Reached target sockets.target.
Jul 2 00:54:34.726206 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 00:54:34.726213 systemd[1]: Finished network-cleanup.service.
Jul 2 00:54:34.726221 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:54:34.726228 systemd[1]: Starting systemd-journald.service...
Jul 2 00:54:34.726235 systemd[1]: Starting systemd-modules-load.service...
Jul 2 00:54:34.726242 systemd[1]: Starting systemd-resolved.service...
Jul 2 00:54:34.726249 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 00:54:34.726256 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 00:54:34.726263 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:54:34.726270 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 00:54:34.726276 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 00:54:34.726284 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 00:54:34.726291 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 00:54:34.726299 kernel: audit: type=1130 audit(1719881674.723:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:34.726333 systemd-journald[290]: Journal started
Jul 2 00:54:34.726379 systemd-journald[290]: Runtime Journal (/run/log/journal/a45bd53f17454c60b64a07284790c7de) is 6.0M, max 48.7M, 42.6M free.
Jul 2 00:54:34.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:34.717925 systemd-modules-load[291]: Inserted module 'overlay'
Jul 2 00:54:34.727993 systemd[1]: Started systemd-journald.service.
Jul 2 00:54:34.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:34.730322 kernel: audit: type=1130 audit(1719881674.728:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:34.740830 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:54:34.742259 systemd-resolved[292]: Positive Trust Anchors:
Jul 2 00:54:34.742272 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:54:34.744083 kernel: Bridge firewalling registered
Jul 2 00:54:34.742299 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 00:54:34.742464 systemd-modules-load[291]: Inserted module 'br_netfilter'
Jul 2 00:54:34.746965 systemd-resolved[292]: Defaulting to hostname 'linux'.
Jul 2 00:54:34.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:34.749183 systemd[1]: Started systemd-resolved.service.
Jul 2 00:54:34.750919 systemd[1]: Reached target nss-lookup.target.
Jul 2 00:54:34.753007 kernel: audit: type=1130 audit(1719881674.749:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:34.754013 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 00:54:34.757318 kernel: SCSI subsystem initialized
Jul 2 00:54:34.757336 kernel: audit: type=1130 audit(1719881674.754:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:34.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:34.755880 systemd[1]: Starting dracut-cmdline.service...
Jul 2 00:54:34.762878 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:54:34.762923 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:54:34.762933 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 2 00:54:34.764491 dracut-cmdline[309]: dracut-dracut-053
Jul 2 00:54:34.765416 systemd-modules-load[291]: Inserted module 'dm_multipath'
Jul 2 00:54:34.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:34.767931 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 00:54:34.772091 kernel: audit: type=1130 audit(1719881674.766:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:34.766515 systemd[1]: Finished systemd-modules-load.service.
Jul 2 00:54:34.768060 systemd[1]: Starting systemd-sysctl.service...
Jul 2 00:54:34.778494 systemd[1]: Finished systemd-sysctl.service.
Jul 2 00:54:34.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:34.782330 kernel: audit: type=1130 audit(1719881674.779:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:34.824334 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:54:34.836327 kernel: iscsi: registered transport (tcp)
Jul 2 00:54:34.852327 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:54:34.852342 kernel: QLogic iSCSI HBA Driver
Jul 2 00:54:34.886020 systemd[1]: Finished dracut-cmdline.service.
Jul 2 00:54:34.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:34.887494 systemd[1]: Starting dracut-pre-udev.service...
Jul 2 00:54:34.889857 kernel: audit: type=1130 audit(1719881674.886:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:34.933334 kernel: raid6: neonx8 gen() 13814 MB/s
Jul 2 00:54:34.950320 kernel: raid6: neonx8 xor() 10837 MB/s
Jul 2 00:54:34.967366 kernel: raid6: neonx4 gen() 13549 MB/s
Jul 2 00:54:34.984332 kernel: raid6: neonx4 xor() 11243 MB/s
Jul 2 00:54:35.001327 kernel: raid6: neonx2 gen() 12963 MB/s
Jul 2 00:54:35.018335 kernel: raid6: neonx2 xor() 10615 MB/s
Jul 2 00:54:35.035334 kernel: raid6: neonx1 gen() 10551 MB/s
Jul 2 00:54:35.052329 kernel: raid6: neonx1 xor() 8786 MB/s
Jul 2 00:54:35.069322 kernel: raid6: int64x8 gen() 6275 MB/s
Jul 2 00:54:35.086323 kernel: raid6: int64x8 xor() 3544 MB/s
Jul 2 00:54:35.103324 kernel: raid6: int64x4 gen() 7204 MB/s
Jul 2 00:54:35.120323 kernel: raid6: int64x4 xor() 3854 MB/s
Jul 2 00:54:35.137325 kernel: raid6: int64x2 gen() 6150 MB/s
Jul 2 00:54:35.154335 kernel: raid6: int64x2 xor() 3321 MB/s
Jul 2 00:54:35.171327 kernel: raid6: int64x1 gen() 5040 MB/s
Jul 2 00:54:35.188517 kernel: raid6: int64x1 xor() 2646 MB/s
Jul 2 00:54:35.188531 kernel: raid6: using algorithm neonx8 gen() 13814 MB/s
Jul 2 00:54:35.188540 kernel: raid6: .... xor() 10837 MB/s, rmw enabled
Jul 2 00:54:35.188548 kernel: raid6: using neon recovery algorithm
Jul 2 00:54:35.200339 kernel: xor: measuring software checksum speed
Jul 2 00:54:35.200392 kernel: 8regs : 17297 MB/sec
Jul 2 00:54:35.201326 kernel: 32regs : 20765 MB/sec
Jul 2 00:54:35.202332 kernel: arm64_neon : 27949 MB/sec
Jul 2 00:54:35.202344 kernel: xor: using function: arm64_neon (27949 MB/sec)
Jul 2 00:54:35.256332 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 2 00:54:35.266700 systemd[1]: Finished dracut-pre-udev.service.
Jul 2 00:54:35.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:35.269000 audit: BPF prog-id=7 op=LOAD
Jul 2 00:54:35.269000 audit: BPF prog-id=8 op=LOAD
Jul 2 00:54:35.270332 kernel: audit: type=1130 audit(1719881675.266:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:35.270350 kernel: audit: type=1334 audit(1719881675.269:10): prog-id=7 op=LOAD
Jul 2 00:54:35.270520 systemd[1]: Starting systemd-udevd.service...
Jul 2 00:54:35.282662 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Jul 2 00:54:35.286643 systemd[1]: Started systemd-udevd.service.
Jul 2 00:54:35.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:35.290669 systemd[1]: Starting dracut-pre-trigger.service...
Jul 2 00:54:35.301621 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation
Jul 2 00:54:35.327768 systemd[1]: Finished dracut-pre-trigger.service.
Jul 2 00:54:35.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:35.329130 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 00:54:35.365465 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 00:54:35.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:35.405961 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 00:54:35.413397 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:54:35.413435 kernel: GPT:9289727 != 19775487
Jul 2 00:54:35.413444 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:54:35.413459 kernel: GPT:9289727 != 19775487
Jul 2 00:54:35.414506 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:54:35.414521 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:54:35.427999 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 2 00:54:35.428826 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Jul 2 00:54:35.432920 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 2 00:54:35.436064 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 2 00:54:35.439346 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (534)
Jul 2 00:54:35.439543 systemd[1]: Starting disk-uuid.service...
Jul 2 00:54:35.445368 disk-uuid[559]: Primary Header is updated.
Jul 2 00:54:35.445368 disk-uuid[559]: Secondary Entries is updated.
Jul 2 00:54:35.445368 disk-uuid[559]: Secondary Header is updated.
Jul 2 00:54:35.448339 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:54:35.458329 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:54:35.461331 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:54:35.527189 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 2 00:54:36.462339 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:54:36.462574 disk-uuid[560]: The operation has completed successfully.
Jul 2 00:54:36.490963 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:54:36.491131 systemd[1]: Finished disk-uuid.service.
Jul 2 00:54:36.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.497799 systemd[1]: Starting verity-setup.service...
Jul 2 00:54:36.515336 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 00:54:36.538691 systemd[1]: Found device dev-mapper-usr.device.
Jul 2 00:54:36.540065 systemd[1]: Mounting sysusr-usr.mount...
Jul 2 00:54:36.540752 systemd[1]: Finished verity-setup.service.
Jul 2 00:54:36.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.587343 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Jul 2 00:54:36.587444 systemd[1]: Mounted sysusr-usr.mount.
Jul 2 00:54:36.588097 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Jul 2 00:54:36.588798 systemd[1]: Starting ignition-setup.service...
Jul 2 00:54:36.590415 systemd[1]: Starting parse-ip-for-networkd.service...
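The GPT warnings above are the expected result of a 9.43 GiB disk image having been written to this device, and they are harmless here: disk-uuid.service rewrites the backup GPT structures at the real end of the disk a few lines later ("Primary Header is updated. ... The operation has completed successfully."). On a system without such a first-boot repair service, the kernel's suggestion could be followed by hand; a minimal sketch, assuming the device name /dev/vda taken from the log (sgdisk is part of the gdisk package, not something run in this boot):

    sgdisk --move-second-header /dev/vda   # relocate the backup GPT header/entries to the end of the disk (same as sgdisk -e)
    parted /dev/vda print                  # interactive alternative: GNU Parted detects the mismatch and offers to fix it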
Jul 2 00:54:36.598430 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:54:36.598467 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:54:36.598477 kernel: BTRFS info (device vda6): has skinny extents
Jul 2 00:54:36.605522 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:54:36.610874 systemd[1]: Finished ignition-setup.service.
Jul 2 00:54:36.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.612148 systemd[1]: Starting ignition-fetch-offline.service...
Jul 2 00:54:36.674951 systemd[1]: Finished parse-ip-for-networkd.service.
Jul 2 00:54:36.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.676000 audit: BPF prog-id=9 op=LOAD
Jul 2 00:54:36.676901 systemd[1]: Starting systemd-networkd.service...
Jul 2 00:54:36.692643 ignition[646]: Ignition 2.14.0
Jul 2 00:54:36.692654 ignition[646]: Stage: fetch-offline
Jul 2 00:54:36.692695 ignition[646]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:54:36.692705 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:54:36.692862 ignition[646]: parsed url from cmdline: ""
Jul 2 00:54:36.692865 ignition[646]: no config URL provided
Jul 2 00:54:36.692870 ignition[646]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:54:36.692885 ignition[646]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:54:36.692905 ignition[646]: op(1): [started] loading QEMU firmware config module
Jul 2 00:54:36.692910 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 00:54:36.702543 ignition[646]: op(1): [finished] loading QEMU firmware config module
Jul 2 00:54:36.712940 systemd-networkd[737]: lo: Link UP
Jul 2 00:54:36.712952 systemd-networkd[737]: lo: Gained carrier
Jul 2 00:54:36.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.713303 systemd-networkd[737]: Enumeration completed
Jul 2 00:54:36.713391 systemd[1]: Started systemd-networkd.service.
Jul 2 00:54:36.713493 systemd-networkd[737]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:54:36.714410 systemd-networkd[737]: eth0: Link UP
Jul 2 00:54:36.714413 systemd-networkd[737]: eth0: Gained carrier
Jul 2 00:54:36.714620 systemd[1]: Reached target network.target.
Jul 2 00:54:36.716428 systemd[1]: Starting iscsiuio.service...
Jul 2 00:54:36.725127 systemd[1]: Started iscsiuio.service.
Jul 2 00:54:36.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.726777 systemd[1]: Starting iscsid.service...
Jul 2 00:54:36.729949 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 00:54:36.729949 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Jul 2 00:54:36.729949 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul 2 00:54:36.729949 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 2 00:54:36.729949 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 00:54:36.729949 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Jul 2 00:54:36.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.732669 systemd[1]: Started iscsid.service.
Jul 2 00:54:36.736904 systemd[1]: Starting dracut-initqueue.service...
Jul 2 00:54:36.740874 systemd-networkd[737]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:54:36.746679 systemd[1]: Finished dracut-initqueue.service.
Jul 2 00:54:36.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.747486 systemd[1]: Reached target remote-fs-pre.target.
Jul 2 00:54:36.748995 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 00:54:36.750345 systemd[1]: Reached target remote-fs.target.
Jul 2 00:54:36.752277 systemd[1]: Starting dracut-pre-mount.service...
Jul 2 00:54:36.759556 systemd[1]: Finished dracut-pre-mount.service.
Jul 2 00:54:36.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.762081 ignition[646]: parsing config with SHA512: 878fbf4b403ed5f5200597d6b7db8c2d8948a55d73e2e5a32461c0bb4ae1a6cd6921014c9571a1caed23ffb5f09819836f0baee91dcaf65e265a09a982c90b48
Jul 2 00:54:36.769377 unknown[646]: fetched base config from "system"
Jul 2 00:54:36.769388 unknown[646]: fetched user config from "qemu"
Jul 2 00:54:36.769854 ignition[646]: fetch-offline: fetch-offline passed
Jul 2 00:54:36.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.770801 systemd[1]: Finished ignition-fetch-offline.service.
Jul 2 00:54:36.769915 ignition[646]: Ignition finished successfully
Jul 2 00:54:36.771512 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 00:54:36.772154 systemd[1]: Starting ignition-kargs.service...
Jul 2 00:54:36.780470 ignition[758]: Ignition 2.14.0
Jul 2 00:54:36.780487 ignition[758]: Stage: kargs
Jul 2 00:54:36.780570 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:54:36.780580 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:54:36.783013 systemd[1]: Finished ignition-kargs.service.
Jul 2 00:54:36.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.781636 ignition[758]: kargs: kargs passed
Jul 2 00:54:36.781676 ignition[758]: Ignition finished successfully
Jul 2 00:54:36.784952 systemd[1]: Starting ignition-disks.service...
Jul 2 00:54:36.790935 ignition[764]: Ignition 2.14.0
Jul 2 00:54:36.790944 ignition[764]: Stage: disks
Jul 2 00:54:36.791029 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:54:36.791038 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:54:36.791891 ignition[764]: disks: disks passed
Jul 2 00:54:36.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.793300 systemd[1]: Finished ignition-disks.service.
Jul 2 00:54:36.791930 ignition[764]: Ignition finished successfully
Jul 2 00:54:36.794688 systemd[1]: Reached target initrd-root-device.target.
Jul 2 00:54:36.795617 systemd[1]: Reached target local-fs-pre.target.
Jul 2 00:54:36.796591 systemd[1]: Reached target local-fs.target.
Jul 2 00:54:36.797648 systemd[1]: Reached target sysinit.target.
Jul 2 00:54:36.798743 systemd[1]: Reached target basic.target.
Jul 2 00:54:36.800424 systemd[1]: Starting systemd-fsck-root.service...
Jul 2 00:54:36.810873 systemd-fsck[772]: ROOT: clean, 614/553520 files, 56019/553472 blocks
Jul 2 00:54:36.815304 systemd[1]: Finished systemd-fsck-root.service.
Jul 2 00:54:36.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.819468 systemd[1]: Mounting sysroot.mount...
Jul 2 00:54:36.825329 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 2 00:54:36.825793 systemd[1]: Mounted sysroot.mount.
Jul 2 00:54:36.826403 systemd[1]: Reached target initrd-root-fs.target.
Jul 2 00:54:36.828201 systemd[1]: Mounting sysroot-usr.mount...
Jul 2 00:54:36.828927 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Jul 2 00:54:36.828963 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:54:36.828985 systemd[1]: Reached target ignition-diskful.target.
Jul 2 00:54:36.830921 systemd[1]: Mounted sysroot-usr.mount.
Jul 2 00:54:36.832156 systemd[1]: Starting initrd-setup-root.service...
Jul 2 00:54:36.836256 initrd-setup-root[782]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:54:36.840844 initrd-setup-root[790]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:54:36.844669 initrd-setup-root[798]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:54:36.848220 initrd-setup-root[806]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:54:36.875100 systemd[1]: Finished initrd-setup-root.service.
Jul 2 00:54:36.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.876481 systemd[1]: Starting ignition-mount.service...
Jul 2 00:54:36.877605 systemd[1]: Starting sysroot-boot.service...
Jul 2 00:54:36.881683 bash[823]: umount: /sysroot/usr/share/oem: not mounted.
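The iscsid warnings earlier in the boot (no InitiatorName configured) are benign on this VM: no iSCSI targets are discovered or logged into, and the daemon runs only so that network root devices could be attached if the initqueue asked for them. For a machine that does use software iSCSI, the file the warning asks for is a one-line config; a minimal sketch, with a hypothetical IQN patterned on the Example line from the log (the value below is illustrative, not taken from this system):

    # /etc/iscsi/initiatorname.iscsi -- IQN is illustrative; any valid iqn.yyyy-mm.<reversed domain name>[:identifier] works
    InitiatorName=iqn.2004-10.com.example:boot-vm-01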
Jul 2 00:54:36.890727 ignition[825]: INFO : Ignition 2.14.0
Jul 2 00:54:36.891495 ignition[825]: INFO : Stage: mount
Jul 2 00:54:36.892291 ignition[825]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:54:36.892371 systemd[1]: Finished sysroot-boot.service.
Jul 2 00:54:36.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:36.894036 ignition[825]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:54:36.895921 ignition[825]: INFO : mount: mount passed
Jul 2 00:54:36.896570 ignition[825]: INFO : Ignition finished successfully
Jul 2 00:54:36.897959 systemd[1]: Finished ignition-mount.service.
Jul 2 00:54:36.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:37.548012 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 00:54:37.554805 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (834)
Jul 2 00:54:37.554837 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:54:37.554847 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:54:37.555720 kernel: BTRFS info (device vda6): has skinny extents
Jul 2 00:54:37.558453 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 00:54:37.559787 systemd[1]: Starting ignition-files.service...
Jul 2 00:54:37.573080 ignition[854]: INFO : Ignition 2.14.0
Jul 2 00:54:37.573080 ignition[854]: INFO : Stage: files
Jul 2 00:54:37.574641 ignition[854]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:54:37.574641 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:54:37.574641 ignition[854]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:54:37.579964 ignition[854]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:54:37.579964 ignition[854]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:54:37.582105 ignition[854]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:54:37.582105 ignition[854]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:54:37.584180 ignition[854]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:54:37.584180 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 00:54:37.584180 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 00:54:37.584180 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 00:54:37.584180 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 00:54:37.582214 unknown[854]: wrote ssh authorized keys file for user: core
Jul 2 00:54:37.769016 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 00:54:37.822484 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 00:54:37.824155 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:54:37.824155 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 2 00:54:38.142013 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jul 2 00:54:38.230979 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:54:38.232388 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:54:38.232388 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:54:38.232388 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:54:38.232388 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:54:38.232388 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:54:38.232388 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:54:38.232388 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:54:38.232388 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:54:38.232388 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:54:38.232388 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:54:38.232388 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 00:54:38.232388 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 00:54:38.232388 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 00:54:38.232388 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Jul 2 00:54:38.447697 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jul 2 00:54:38.743670 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 00:54:38.743670 ignition[854]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 2 00:54:38.746528 ignition[854]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:54:38.746528 ignition[854]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:54:38.746528 ignition[854]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 2 00:54:38.746528 ignition[854]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 2 00:54:38.746528 ignition[854]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:54:38.746528 ignition[854]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:54:38.746528 ignition[854]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 2 00:54:38.746528 ignition[854]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jul 2 00:54:38.746528 ignition[854]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:54:38.746528 ignition[854]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 00:54:38.746528 ignition[854]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jul 2 00:54:38.746528 ignition[854]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:54:38.746528 ignition[854]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:54:38.746528 ignition[854]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:54:38.746528 ignition[854]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:54:38.779673 ignition[854]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:54:38.780903 ignition[854]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 00:54:38.780903 ignition[854]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:54:38.780903 ignition[854]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:54:38.780903 ignition[854]: INFO : files: files passed
Jul 2 00:54:38.780903 ignition[854]: INFO : Ignition finished successfully
Jul 2 00:54:38.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:38.781124 systemd[1]: Finished ignition-files.service.
Jul 2 00:54:38.783760 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 2 00:54:38.784682 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 2 00:54:38.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:38.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:38.791393 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Jul 2 00:54:38.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:38.785506 systemd[1]: Starting ignition-quench.service...
Jul 2 00:54:38.793749 initrd-setup-root-after-ignition[881]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:54:38.788145 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:54:38.788223 systemd[1]: Finished ignition-quench.service.
Jul 2 00:54:38.790478 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 2 00:54:38.790774 systemd-networkd[737]: eth0: Gained IPv6LL
Jul 2 00:54:38.791703 systemd[1]: Reached target ignition-complete.target.
Jul 2 00:54:38.793793 systemd[1]: Starting initrd-parse-etc.service...
Jul 2 00:54:38.805589 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:54:38.805675 systemd[1]: Finished initrd-parse-etc.service.
Jul 2 00:54:38.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:38.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:38.806946 systemd[1]: Reached target initrd-fs.target.
Jul 2 00:54:38.807915 systemd[1]: Reached target initrd.target.
Jul 2 00:54:38.809062 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 2 00:54:38.809759 systemd[1]: Starting dracut-pre-pivot.service...
Jul 2 00:54:38.820169 systemd[1]: Finished dracut-pre-pivot.service.
Jul 2 00:54:38.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:38.821621 systemd[1]: Starting initrd-cleanup.service...
Jul 2 00:54:38.830881 systemd[1]: Stopped target nss-lookup.target.
Jul 2 00:54:38.831599 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 2 00:54:38.832740 systemd[1]: Stopped target timers.target.
Jul 2 00:54:38.833732 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:54:38.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:54:38.833836 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 2 00:54:38.834815 systemd[1]: Stopped target initrd.target.
Jul 2 00:54:38.835889 systemd[1]: Stopped target basic.target.
Jul 2 00:54:38.837009 systemd[1]: Stopped target ignition-complete.target.
Jul 2 00:54:38.837989 systemd[1]: Stopped target ignition-diskful.target.
Jul 2 00:54:38.838991 systemd[1]: Stopped target initrd-root-device.target.
Jul 2 00:54:38.840065 systemd[1]: Stopped target remote-fs.target.
Jul 2 00:54:38.841089 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 00:54:38.842142 systemd[1]: Stopped target sysinit.target. Jul 2 00:54:38.843281 systemd[1]: Stopped target local-fs.target. Jul 2 00:54:38.844253 systemd[1]: Stopped target local-fs-pre.target. Jul 2 00:54:38.845224 systemd[1]: Stopped target swap.target. Jul 2 00:54:38.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.846135 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 00:54:38.846250 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 00:54:38.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.847258 systemd[1]: Stopped target cryptsetup.target. Jul 2 00:54:38.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.848177 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:54:38.848272 systemd[1]: Stopped dracut-initqueue.service. Jul 2 00:54:38.849456 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:54:38.849548 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 00:54:38.850758 systemd[1]: Stopped target paths.target. Jul 2 00:54:38.851680 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:54:38.855349 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 00:54:38.856505 systemd[1]: Stopped target slices.target. Jul 2 00:54:38.857515 systemd[1]: Stopped target sockets.target. Jul 2 00:54:38.858428 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 00:54:38.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.858536 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 00:54:38.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.859557 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 00:54:38.859644 systemd[1]: Stopped ignition-files.service. Jul 2 00:54:38.863415 iscsid[744]: iscsid shutting down. Jul 2 00:54:38.861762 systemd[1]: Stopping ignition-mount.service... Jul 2 00:54:38.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.862979 systemd[1]: Stopping iscsid.service... Jul 2 00:54:38.863786 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 00:54:38.863909 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 00:54:38.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.865517 systemd[1]: Stopping sysroot-boot.service... 
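Each audit[1]: SERVICE_START / SERVICE_STOP record in this stretch follows a fixed key=value layout, so the unit name and result can be pulled out mechanically. A minimal sketch, with the regex keyed to the fields exactly as they appear in this log:

    import re

    # Matches systemd service audit records like the SERVICE_STOP lines above.
    AUDIT_RE = re.compile(
        r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP)"
        r".*?msg='unit=(\S+).*?res=(\w+)'"
    )

    def parse_service_audit(line):
        """Return (event, unit, result) for a service audit record, else None."""
        m = AUDIT_RE.search(line)
        return m.groups() if m else None

    line = ("Jul 2 00:54:38.834000 audit[1]: SERVICE_STOP pid=1 uid=0 "
            "msg='unit=dracut-pre-pivot comm=\"systemd\" hostname=? addr=? "
            "terminal=? res=success'")
    print(parse_service_audit(line))
    # -> ('SERVICE_STOP', 'dracut-pre-pivot', 'success')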
Jul 2 00:54:38.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.866266 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 00:54:38.866399 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 00:54:38.870979 ignition[894]: INFO : Ignition 2.14.0 Jul 2 00:54:38.870979 ignition[894]: INFO : Stage: umount Jul 2 00:54:38.870979 ignition[894]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:54:38.870979 ignition[894]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:54:38.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.867671 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 00:54:38.876502 ignition[894]: INFO : umount: umount passed Jul 2 00:54:38.876502 ignition[894]: INFO : Ignition finished successfully Jul 2 00:54:38.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.867792 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 00:54:38.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.870434 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 00:54:38.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.870521 systemd[1]: Stopped iscsid.service. Jul 2 00:54:38.871957 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:54:38.872023 systemd[1]: Closed iscsid.socket. Jul 2 00:54:38.873180 systemd[1]: Stopping iscsiuio.service... Jul 2 00:54:38.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.875850 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 00:54:38.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.876268 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 00:54:38.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.876371 systemd[1]: Stopped iscsiuio.service. Jul 2 00:54:38.877581 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 00:54:38.877668 systemd[1]: Finished initrd-cleanup.service. Jul 2 00:54:38.878850 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 00:54:38.879278 systemd[1]: Stopped ignition-mount.service. 
Jul 2 00:54:38.881210 systemd[1]: Stopped target network.target. Jul 2 00:54:38.882220 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:54:38.882253 systemd[1]: Closed iscsiuio.socket. Jul 2 00:54:38.883239 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 00:54:38.883277 systemd[1]: Stopped ignition-disks.service. Jul 2 00:54:38.884387 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 00:54:38.884425 systemd[1]: Stopped ignition-kargs.service. Jul 2 00:54:38.885399 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 00:54:38.885489 systemd[1]: Stopped ignition-setup.service. Jul 2 00:54:38.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.886713 systemd[1]: Stopping systemd-networkd.service... Jul 2 00:54:38.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.888080 systemd[1]: Stopping systemd-resolved.service... Jul 2 00:54:38.896664 systemd-networkd[737]: eth0: DHCPv6 lease lost Jul 2 00:54:38.897083 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:54:38.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.897206 systemd[1]: Stopped systemd-resolved.service. Jul 2 00:54:38.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.898795 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:54:38.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.908000 audit: BPF prog-id=6 op=UNLOAD Jul 2 00:54:38.908000 audit: BPF prog-id=9 op=UNLOAD Jul 2 00:54:38.898886 systemd[1]: Stopped systemd-networkd.service. Jul 2 00:54:38.899962 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:54:38.899988 systemd[1]: Closed systemd-networkd.socket. Jul 2 00:54:38.901730 systemd[1]: Stopping network-cleanup.service... Jul 2 00:54:38.902559 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 00:54:38.902613 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 00:54:38.903856 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:54:38.903905 systemd[1]: Stopped systemd-sysctl.service. Jul 2 00:54:38.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.905711 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 00:54:38.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.905751 systemd[1]: Stopped systemd-modules-load.service. Jul 2 00:54:38.910246 systemd[1]: Stopping systemd-udevd.service... 
Jul 2 00:54:38.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.912482 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 00:54:38.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.915078 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 00:54:38.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.915205 systemd[1]: Stopped systemd-udevd.service. Jul 2 00:54:38.916547 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 00:54:38.916620 systemd[1]: Stopped network-cleanup.service. Jul 2 00:54:38.917449 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:54:38.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.917482 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 00:54:38.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.918610 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:54:38.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.918639 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 00:54:38.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:38.919304 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:54:38.919375 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 00:54:38.920804 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:54:38.920845 systemd[1]: Stopped dracut-cmdline.service. Jul 2 00:54:38.921923 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:54:38.921961 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 00:54:38.924883 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 00:54:38.926786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:54:38.926843 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 00:54:38.928500 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 00:54:38.939000 audit: BPF prog-id=5 op=UNLOAD Jul 2 00:54:38.939000 audit: BPF prog-id=4 op=UNLOAD Jul 2 00:54:38.939000 audit: BPF prog-id=3 op=UNLOAD Jul 2 00:54:38.928587 systemd[1]: Stopped sysroot-boot.service. 
Jul 2 00:54:38.929418 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:54:38.929454 systemd[1]: Stopped initrd-setup-root.service. Jul 2 00:54:38.941000 audit: BPF prog-id=8 op=UNLOAD Jul 2 00:54:38.941000 audit: BPF prog-id=7 op=UNLOAD Jul 2 00:54:38.930635 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:54:38.930704 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 00:54:38.931840 systemd[1]: Reached target initrd-switch-root.target. Jul 2 00:54:38.933598 systemd[1]: Starting initrd-switch-root.service... Jul 2 00:54:38.939412 systemd[1]: Switching root. Jul 2 00:54:38.954481 systemd-journald[290]: Journal stopped Jul 2 00:54:41.052907 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Jul 2 00:54:41.052964 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 00:54:41.052979 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 00:54:41.052989 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 00:54:41.052999 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 00:54:41.053008 kernel: SELinux: policy capability open_perms=1 Jul 2 00:54:41.053017 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 00:54:41.053027 kernel: SELinux: policy capability always_check_network=0 Jul 2 00:54:41.053036 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 00:54:41.053046 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 00:54:41.053060 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 00:54:41.053070 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 00:54:41.053081 systemd[1]: Successfully loaded SELinux policy in 31.482ms. Jul 2 00:54:41.053099 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.766ms. Jul 2 00:54:41.053112 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 00:54:41.053126 systemd[1]: Detected virtualization kvm. Jul 2 00:54:41.053137 systemd[1]: Detected architecture arm64. Jul 2 00:54:41.053147 systemd[1]: Detected first boot. Jul 2 00:54:41.053158 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:54:41.053168 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
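The long feature string in the systemd 252 banner above records compile-time options as +/- flags; note -BPF_FRAMEWORK, which is why journald warns a little later that the local system does not support BPF/cgroup firewalling. Splitting the string from this log into enabled and disabled sets:

    # Feature string copied verbatim from the systemd 252 banner above.
    features = (
        "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
        "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
        "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
        "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT"
    )
    enabled = {f[1:] for f in features.split() if f[0] == "+"}
    disabled = {f[1:] for f in features.split() if f[0] == "-"}
    print(sorted(disabled))
    # ['ACL', 'APPARMOR', 'BPF_FRAMEWORK', 'ELFUTILS', 'FIDO2', 'GNUTLS', ...]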
Jul 2 00:54:41.053177 kernel: kauditd_printk_skb: 70 callbacks suppressed Jul 2 00:54:41.053188 kernel: audit: type=1400 audit(1719881679.270:81): avc: denied { associate } for pid=944 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 00:54:41.053199 kernel: audit: type=1300 audit(1719881679.270:81): arch=c00000b7 syscall=5 success=yes exit=0 a0=400018d6d4 a1=4000028b40 a2=4000026a40 a3=32 items=0 ppid=927 pid=944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:54:41.053211 kernel: audit: type=1327 audit(1719881679.270:81): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 00:54:41.053222 kernel: audit: type=1400 audit(1719881679.277:82): avc: denied { associate } for pid=944 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 00:54:41.053232 kernel: audit: type=1300 audit(1719881679.277:82): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=400018d7b9 a2=1ed a3=0 items=2 ppid=927 pid=944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:54:41.053242 kernel: audit: type=1307 audit(1719881679.277:82): cwd="/" Jul 2 00:54:41.053251 kernel: audit: type=1302 audit(1719881679.277:82): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:54:41.053262 kernel: audit: type=1302 audit(1719881679.277:82): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:54:41.053273 kernel: audit: type=1327 audit(1719881679.277:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 00:54:41.053284 systemd[1]: Populated /etc with preset unit settings. Jul 2 00:54:41.053296 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:54:41.053307 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:54:41.053329 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:54:41.053340 systemd[1]: Queued start job for default target multi-user.target. 
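The audit PROCTITLE payloads above are the generator's command line, hex-encoded with NUL separators between arguments; the record is length-limited, so the final argument is cut short. Decoding the exact value captured in the log:

    # PROCTITLE hex copied verbatim from the audit records above.
    hexstr = (
        "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F"
        "72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F"
        "67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72"
        "2E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61"
    )
    argv = [a.decode() for a in bytes.fromhex(hexstr).split(b"\x00")]
    print(argv)
    # ['/usr/lib/systemd/system-generators/torcx-generator',
    #  '/run/systemd/generator', '/run/systemd/generator.early',
    #  '/run/systemd/generator.la']   # last argument truncated by the record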
Jul 2 00:54:41.053351 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 2 00:54:41.053362 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 00:54:41.053373 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 00:54:41.053384 systemd[1]: Created slice system-getty.slice. Jul 2 00:54:41.053396 systemd[1]: Created slice system-modprobe.slice. Jul 2 00:54:41.053407 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 00:54:41.053417 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 00:54:41.053430 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 00:54:41.053441 systemd[1]: Created slice user.slice. Jul 2 00:54:41.053451 systemd[1]: Started systemd-ask-password-console.path. Jul 2 00:54:41.053461 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 00:54:41.053473 systemd[1]: Set up automount boot.automount. Jul 2 00:54:41.053483 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 00:54:41.053493 systemd[1]: Reached target integritysetup.target. Jul 2 00:54:41.053503 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 00:54:41.053514 systemd[1]: Reached target remote-fs.target. Jul 2 00:54:41.053524 systemd[1]: Reached target slices.target. Jul 2 00:54:41.053534 systemd[1]: Reached target swap.target. Jul 2 00:54:41.053545 systemd[1]: Reached target torcx.target. Jul 2 00:54:41.053555 systemd[1]: Reached target veritysetup.target. Jul 2 00:54:41.053566 systemd[1]: Listening on systemd-coredump.socket. Jul 2 00:54:41.053577 systemd[1]: Listening on systemd-initctl.socket. Jul 2 00:54:41.053589 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 00:54:41.053600 kernel: audit: type=1400 audit(1719881680.962:83): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 00:54:41.053610 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 00:54:41.053620 systemd[1]: Listening on systemd-journald.socket. Jul 2 00:54:41.053630 systemd[1]: Listening on systemd-networkd.socket. Jul 2 00:54:41.053641 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 00:54:41.053653 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 00:54:41.053664 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 00:54:41.053674 systemd[1]: Mounting dev-hugepages.mount... Jul 2 00:54:41.053684 systemd[1]: Mounting dev-mqueue.mount... Jul 2 00:54:41.053697 systemd[1]: Mounting media.mount... Jul 2 00:54:41.053707 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 00:54:41.053717 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 00:54:41.053728 systemd[1]: Mounting tmp.mount... Jul 2 00:54:41.053739 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 00:54:41.053749 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:54:41.053760 systemd[1]: Starting kmod-static-nodes.service... Jul 2 00:54:41.053771 systemd[1]: Starting modprobe@configfs.service... Jul 2 00:54:41.053782 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:54:41.053793 systemd[1]: Starting modprobe@drm.service... Jul 2 00:54:41.053803 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:54:41.053814 systemd[1]: Starting modprobe@fuse.service... Jul 2 00:54:41.053824 systemd[1]: Starting modprobe@loop.service... 
Jul 2 00:54:41.053835 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 00:54:41.053847 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 2 00:54:41.053858 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 2 00:54:41.053873 systemd[1]: Starting systemd-journald.service... Jul 2 00:54:41.053888 systemd[1]: Starting systemd-modules-load.service... Jul 2 00:54:41.053898 systemd[1]: Starting systemd-network-generator.service... Jul 2 00:54:41.053909 systemd[1]: Starting systemd-remount-fs.service... Jul 2 00:54:41.053919 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 00:54:41.053929 systemd[1]: Mounted dev-hugepages.mount. Jul 2 00:54:41.053940 systemd[1]: Mounted dev-mqueue.mount. Jul 2 00:54:41.053950 systemd[1]: Mounted media.mount. Jul 2 00:54:41.053962 kernel: loop: module loaded Jul 2 00:54:41.053972 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 00:54:41.053982 kernel: fuse: init (API version 7.34) Jul 2 00:54:41.053993 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 00:54:41.054003 systemd[1]: Mounted tmp.mount. Jul 2 00:54:41.054013 systemd[1]: Finished kmod-static-nodes.service. Jul 2 00:54:41.054024 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 00:54:41.054035 systemd[1]: Finished modprobe@configfs.service. Jul 2 00:54:41.054046 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:54:41.054058 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:54:41.054068 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:54:41.054078 systemd[1]: Finished modprobe@drm.service. Jul 2 00:54:41.054088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:54:41.054099 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:54:41.054109 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 00:54:41.054119 systemd[1]: Finished modprobe@fuse.service. Jul 2 00:54:41.054130 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:54:41.054141 systemd[1]: Finished modprobe@loop.service. Jul 2 00:54:41.054151 systemd[1]: Finished systemd-modules-load.service. Jul 2 00:54:41.054161 systemd[1]: Finished systemd-network-generator.service. Jul 2 00:54:41.054175 systemd-journald[1023]: Journal started Jul 2 00:54:41.054222 systemd-journald[1023]: Runtime Journal (/run/log/journal/a45bd53f17454c60b64a07284790c7de) is 6.0M, max 48.7M, 42.6M free. Jul 2 00:54:40.962000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 00:54:40.962000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 2 00:54:41.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:54:41.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.047000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 00:54:41.047000 audit[1023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffff7e10300 a2=4000 a3=1 items=0 ppid=1 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:54:41.047000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 00:54:41.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:54:41.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.055817 systemd[1]: Started systemd-journald.service. Jul 2 00:54:41.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.056969 systemd[1]: Finished systemd-remount-fs.service. Jul 2 00:54:41.058283 systemd[1]: Reached target network-pre.target. Jul 2 00:54:41.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.060038 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 00:54:41.061926 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 00:54:41.062524 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 00:54:41.064804 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 00:54:41.066579 systemd[1]: Starting systemd-journal-flush.service... Jul 2 00:54:41.067202 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:54:41.068424 systemd[1]: Starting systemd-random-seed.service... Jul 2 00:54:41.070003 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:54:41.076511 systemd-journald[1023]: Time spent on flushing to /var/log/journal/a45bd53f17454c60b64a07284790c7de is 11.450ms for 933 entries. Jul 2 00:54:41.076511 systemd-journald[1023]: System Journal (/var/log/journal/a45bd53f17454c60b64a07284790c7de) is 8.0M, max 195.6M, 187.6M free. Jul 2 00:54:41.129469 systemd-journald[1023]: Received client request to flush runtime journal. Jul 2 00:54:41.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:54:41.071072 systemd[1]: Starting systemd-sysctl.service... Jul 2 00:54:41.073059 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 00:54:41.132196 udevadm[1068]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 00:54:41.073896 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 00:54:41.078180 systemd[1]: Finished systemd-random-seed.service. Jul 2 00:54:41.079036 systemd[1]: Reached target first-boot-complete.target. Jul 2 00:54:41.083668 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 00:54:41.085484 systemd[1]: Starting systemd-udev-settle.service... Jul 2 00:54:41.097205 systemd[1]: Finished systemd-sysctl.service. Jul 2 00:54:41.100986 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 00:54:41.102811 systemd[1]: Starting systemd-sysusers.service... Jul 2 00:54:41.125612 systemd[1]: Finished systemd-sysusers.service. Jul 2 00:54:41.127559 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 00:54:41.130514 systemd[1]: Finished systemd-journal-flush.service. Jul 2 00:54:41.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.145929 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 00:54:41.450351 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 00:54:41.452249 systemd[1]: Starting systemd-udevd.service... Jul 2 00:54:41.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.468384 systemd-udevd[1086]: Using default interface naming scheme 'v252'. Jul 2 00:54:41.479788 systemd[1]: Started systemd-udevd.service. Jul 2 00:54:41.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.482136 systemd[1]: Starting systemd-networkd.service... Jul 2 00:54:41.488237 systemd[1]: Starting systemd-userdbd.service... Jul 2 00:54:41.504398 systemd[1]: Found device dev-ttyAMA0.device. Jul 2 00:54:41.533126 systemd[1]: Started systemd-userdbd.service. Jul 2 00:54:41.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.553644 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 00:54:41.582853 systemd[1]: Finished systemd-udev-settle.service. Jul 2 00:54:41.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.584836 systemd[1]: Starting lvm2-activation-early.service... Jul 2 00:54:41.611930 lvm[1119]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
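Unit names such as systemd-fsck@dev-disk-by\x2dlabel-OEM.service above embed a device path escaped by systemd's naming rules: the leading '/' is dropped, each remaining '/' becomes '-', and a literal '-' inside a path component becomes \x2d. A small sketch approximating systemd-escape --path for the common cases seen in this log (not a byte-exact reimplementation):

    def systemd_escape_path(path: str) -> str:
        """Approximate systemd-escape --path: '/' -> '-', unsafe chars -> \\xXX."""
        def esc(part: str) -> str:
            out = []
            for i, ch in enumerate(part):
                if ch.isalnum() or ch == "_" or (ch == "." and i > 0):
                    out.append(ch)
                else:
                    out.extend(f"\\x{b:02x}" for b in ch.encode())
            return "".join(out)
        return "-".join(esc(p) for p in path.strip("/").split("/"))

    print(systemd_escape_path("/dev/disk/by-label/OEM"))
    # -> dev-disk-by\x2dlabel-OEM  (matches the .device/.service names above)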
Jul 2 00:54:41.618134 systemd-networkd[1093]: lo: Link UP Jul 2 00:54:41.618144 systemd-networkd[1093]: lo: Gained carrier Jul 2 00:54:41.618532 systemd-networkd[1093]: Enumeration completed Jul 2 00:54:41.618645 systemd-networkd[1093]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:54:41.618691 systemd[1]: Started systemd-networkd.service. Jul 2 00:54:41.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.620297 systemd-networkd[1093]: eth0: Link UP Jul 2 00:54:41.620306 systemd-networkd[1093]: eth0: Gained carrier Jul 2 00:54:41.635350 systemd[1]: Finished lvm2-activation-early.service. Jul 2 00:54:41.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.636158 systemd[1]: Reached target cryptsetup.target. Jul 2 00:54:41.638121 systemd[1]: Starting lvm2-activation.service... Jul 2 00:54:41.641952 lvm[1122]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:54:41.642341 systemd-networkd[1093]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 00:54:41.665376 systemd[1]: Finished lvm2-activation.service. Jul 2 00:54:41.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.666148 systemd[1]: Reached target local-fs-pre.target. Jul 2 00:54:41.666851 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 00:54:41.666889 systemd[1]: Reached target local-fs.target. Jul 2 00:54:41.667519 systemd[1]: Reached target machines.target. Jul 2 00:54:41.669716 systemd[1]: Starting ldconfig.service... Jul 2 00:54:41.671211 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:54:41.671272 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:54:41.672748 systemd[1]: Starting systemd-boot-update.service... Jul 2 00:54:41.675014 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 00:54:41.677233 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 00:54:41.679335 systemd[1]: Starting systemd-sysext.service... Jul 2 00:54:41.680395 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1125 (bootctl) Jul 2 00:54:41.681665 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 00:54:41.689715 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 00:54:41.695473 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 00:54:41.695745 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 00:54:41.700899 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 00:54:41.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:54:41.819382 kernel: loop0: detected capacity change from 0 to 193208 Jul 2 00:54:41.820059 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 00:54:41.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.830343 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 00:54:41.832601 systemd-fsck[1136]: fsck.fat 4.2 (2021-01-31) Jul 2 00:54:41.832601 systemd-fsck[1136]: /dev/vda1: 236 files, 117047/258078 clusters Jul 2 00:54:41.840450 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 00:54:41.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.858355 kernel: loop1: detected capacity change from 0 to 193208 Jul 2 00:54:41.862523 (sd-sysext)[1144]: Using extensions 'kubernetes'. Jul 2 00:54:41.862885 (sd-sysext)[1144]: Merged extensions into '/usr'. Jul 2 00:54:41.878877 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:54:41.880241 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:54:41.882070 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:54:41.884206 systemd[1]: Starting modprobe@loop.service... Jul 2 00:54:41.885037 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:54:41.885173 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:54:41.885947 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:54:41.886195 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:54:41.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.887554 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:54:41.887726 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:54:41.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.889107 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:54:41.889412 systemd[1]: Finished modprobe@loop.service. Jul 2 00:54:41.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 00:54:41.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:41.890677 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:54:41.890783 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:54:41.942228 ldconfig[1124]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 00:54:41.946176 systemd[1]: Finished ldconfig.service. Jul 2 00:54:41.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.023422 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 00:54:42.025233 systemd[1]: Mounting boot.mount... Jul 2 00:54:42.027080 systemd[1]: Mounting usr-share-oem.mount... Jul 2 00:54:42.033377 systemd[1]: Mounted boot.mount. Jul 2 00:54:42.035542 systemd[1]: Mounted usr-share-oem.mount. Jul 2 00:54:42.037637 systemd[1]: Finished systemd-sysext.service. Jul 2 00:54:42.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.039617 systemd[1]: Starting ensure-sysext.service... Jul 2 00:54:42.041380 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 00:54:42.045806 systemd[1]: Reloading. Jul 2 00:54:42.050925 systemd-tmpfiles[1161]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 00:54:42.051621 systemd-tmpfiles[1161]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 00:54:42.053010 systemd-tmpfiles[1161]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 00:54:42.075857 /usr/lib/systemd/system-generators/torcx-generator[1182]: time="2024-07-02T00:54:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:54:42.076181 /usr/lib/systemd/system-generators/torcx-generator[1182]: time="2024-07-02T00:54:42Z" level=info msg="torcx already run" Jul 2 00:54:42.137878 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:54:42.137899 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:54:42.153323 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:54:42.198521 systemd[1]: Finished systemd-boot-update.service. Jul 2 00:54:42.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 2 00:54:42.200422 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 00:54:42.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.203526 systemd[1]: Starting audit-rules.service... Jul 2 00:54:42.205172 systemd[1]: Starting clean-ca-certificates.service... Jul 2 00:54:42.207227 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 00:54:42.209674 systemd[1]: Starting systemd-resolved.service... Jul 2 00:54:42.211784 systemd[1]: Starting systemd-timesyncd.service... Jul 2 00:54:42.213607 systemd[1]: Starting systemd-update-utmp.service... Jul 2 00:54:42.215034 systemd[1]: Finished clean-ca-certificates.service. Jul 2 00:54:42.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.218145 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:54:42.219000 audit[1234]: SYSTEM_BOOT pid=1234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.221852 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:54:42.223289 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:54:42.225200 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:54:42.228256 systemd[1]: Starting modprobe@loop.service... Jul 2 00:54:42.228886 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:54:42.229066 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:54:42.229264 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:54:42.230456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:54:42.230625 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:54:42.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.231852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:54:42.232041 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:54:42.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:54:42.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.233344 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:54:42.233504 systemd[1]: Finished modprobe@loop.service. Jul 2 00:54:42.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.235829 systemd[1]: Finished systemd-update-utmp.service. Jul 2 00:54:42.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.237766 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:54:42.239021 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:54:42.240848 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:54:42.243200 systemd[1]: Starting modprobe@loop.service... Jul 2 00:54:42.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.243959 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:54:42.244100 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:54:42.244203 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:54:42.245144 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 00:54:42.246359 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:54:42.246499 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:54:42.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.247588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:54:42.247725 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:54:42.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:54:42.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.248734 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:54:42.248901 systemd[1]: Finished modprobe@loop.service. Jul 2 00:54:42.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.250420 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:54:42.250516 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:54:42.252085 systemd[1]: Starting systemd-update-done.service... Jul 2 00:54:42.258984 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:54:42.260193 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:54:42.261989 systemd[1]: Starting modprobe@drm.service... Jul 2 00:54:42.264000 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:54:42.266146 systemd[1]: Starting modprobe@loop.service... Jul 2 00:54:42.268079 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:54:42.268208 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:54:42.269504 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 00:54:42.270232 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:54:42.271487 systemd[1]: Finished systemd-update-done.service. Jul 2 00:54:42.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.272890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:54:42.273046 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:54:42.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.274210 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:54:42.274360 systemd[1]: Finished modprobe@drm.service. Jul 2 00:54:42.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:54:42.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.275508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:54:42.275651 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:54:42.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.276775 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:54:42.276938 systemd[1]: Finished modprobe@loop.service. Jul 2 00:54:42.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.278220 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:54:42.278320 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:54:42.279544 systemd[1]: Finished ensure-sysext.service. Jul 2 00:54:42.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:54:42.283000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 00:54:42.283000 audit[1276]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff032fac0 a2=420 a3=0 items=0 ppid=1227 pid=1276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:54:42.283000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 00:54:42.284563 augenrules[1276]: No rules Jul 2 00:54:42.285935 systemd[1]: Finished audit-rules.service. Jul 2 00:54:42.296750 systemd[1]: Started systemd-timesyncd.service. Jul 2 00:54:42.780889 systemd-timesyncd[1233]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 00:54:42.780939 systemd-timesyncd[1233]: Initial clock synchronization to Tue 2024-07-02 00:54:42.780815 UTC. Jul 2 00:54:42.781067 systemd[1]: Reached target time-set.target. Jul 2 00:54:42.781752 systemd-resolved[1232]: Positive Trust Anchors: Jul 2 00:54:42.783698 systemd-resolved[1232]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:54:42.783796 systemd-resolved[1232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 00:54:42.793289 systemd-resolved[1232]: Defaulting to hostname 'linux'. Jul 2 00:54:42.796561 systemd[1]: Started systemd-resolved.service. Jul 2 00:54:42.797205 systemd[1]: Reached target network.target. Jul 2 00:54:42.797776 systemd[1]: Reached target nss-lookup.target. Jul 2 00:54:42.798340 systemd[1]: Reached target sysinit.target. Jul 2 00:54:42.798974 systemd[1]: Started motdgen.path. Jul 2 00:54:42.799491 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 00:54:42.800440 systemd[1]: Started logrotate.timer. Jul 2 00:54:42.801091 systemd[1]: Started mdadm.timer. Jul 2 00:54:42.801581 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 00:54:42.802161 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:54:42.802185 systemd[1]: Reached target paths.target. Jul 2 00:54:42.802754 systemd[1]: Reached target timers.target. Jul 2 00:54:42.803609 systemd[1]: Listening on dbus.socket. Jul 2 00:54:42.805257 systemd[1]: Starting docker.socket... Jul 2 00:54:42.806822 systemd[1]: Listening on sshd.socket. Jul 2 00:54:42.807473 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:54:42.807843 systemd[1]: Listening on docker.socket. Jul 2 00:54:42.808425 systemd[1]: Reached target sockets.target. Jul 2 00:54:42.808990 systemd[1]: Reached target basic.target. Jul 2 00:54:42.809669 systemd[1]: System is tainted: cgroupsv1 Jul 2 00:54:42.809719 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 00:54:42.809739 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 00:54:42.810755 systemd[1]: Starting containerd.service... Jul 2 00:54:42.812395 systemd[1]: Starting dbus.service... Jul 2 00:54:42.814123 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 00:54:42.815992 systemd[1]: Starting extend-filesystems.service... Jul 2 00:54:42.816734 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 00:54:42.817864 systemd[1]: Starting motdgen.service... Jul 2 00:54:42.819492 systemd[1]: Starting prepare-helm.service... Jul 2 00:54:42.821292 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 00:54:42.823419 systemd[1]: Starting sshd-keygen.service... Jul 2 00:54:42.825973 systemd[1]: Starting systemd-logind.service... Jul 2 00:54:42.826552 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
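A side note on the audit records a few entries back: an audit PROCTITLE field hex-encodes the NUL-separated argv of the audited process. As a purely illustrative Go snippet (not part of the log), decoding the exact proctitle value recorded above recovers the auditctl invocation that loaded /etc/audit/audit.rules:

```go
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
	"log"
)

func main() {
	// Hex string copied verbatim from the PROCTITLE audit record above.
	const proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		log.Fatal(err)
	}
	// argv entries are separated by NUL bytes in the kernel's encoding.
	for _, arg := range bytes.Split(raw, []byte{0}) {
		fmt.Println(string(arg))
	}
	// Output:
	// /sbin/auditctl
	// -R
	// /etc/audit/audit.rules
}
```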
Jul 2 00:54:42.826632 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:54:42.827803 systemd[1]: Starting update-engine.service... Jul 2 00:54:42.829579 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 00:54:42.833551 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:54:42.833829 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 00:54:42.838387 jq[1290]: false Jul 2 00:54:42.842801 jq[1305]: true Jul 2 00:54:42.841856 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:54:42.842099 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 00:54:42.855753 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:54:42.856897 systemd[1]: Finished motdgen.service. Jul 2 00:54:42.858176 jq[1316]: true Jul 2 00:54:42.858843 extend-filesystems[1291]: Found loop1 Jul 2 00:54:42.858843 extend-filesystems[1291]: Found vda Jul 2 00:54:42.860727 extend-filesystems[1291]: Found vda1 Jul 2 00:54:42.860727 extend-filesystems[1291]: Found vda2 Jul 2 00:54:42.860727 extend-filesystems[1291]: Found vda3 Jul 2 00:54:42.860727 extend-filesystems[1291]: Found usr Jul 2 00:54:42.860727 extend-filesystems[1291]: Found vda4 Jul 2 00:54:42.860727 extend-filesystems[1291]: Found vda6 Jul 2 00:54:42.860727 extend-filesystems[1291]: Found vda7 Jul 2 00:54:42.860727 extend-filesystems[1291]: Found vda9 Jul 2 00:54:42.860727 extend-filesystems[1291]: Checking size of /dev/vda9 Jul 2 00:54:42.883465 tar[1309]: linux-arm64/helm Jul 2 00:54:42.895315 extend-filesystems[1291]: Resized partition /dev/vda9 Jul 2 00:54:42.926381 dbus-daemon[1289]: [system] SELinux support is enabled Jul 2 00:54:42.926594 systemd[1]: Started dbus.service. Jul 2 00:54:42.929822 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:54:42.930073 systemd[1]: Reached target system-config.target. Jul 2 00:54:42.930941 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:54:42.930978 systemd[1]: Reached target user-config.target. Jul 2 00:54:42.940225 systemd-logind[1300]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 00:54:42.941091 systemd-logind[1300]: New seat seat0. Jul 2 00:54:42.946933 systemd[1]: Started systemd-logind.service. Jul 2 00:54:42.952076 extend-filesystems[1345]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 00:54:42.964250 bash[1346]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:54:42.965268 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 00:54:42.966590 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 00:54:42.980558 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 00:54:42.989856 extend-filesystems[1345]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 00:54:42.989856 extend-filesystems[1345]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 00:54:42.989856 extend-filesystems[1345]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
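For context on the extend-filesystems work that follows: the EXT4 online resize reported below grows /dev/vda9 from 553472 to 1864699 4k blocks. A quick back-of-the-envelope check (illustrative Go; both block counts are taken straight from the kernel messages) confirms the root filesystem grows from roughly 2.1 GiB to about 7.1 GiB:

```go
package main

import "fmt"

func main() {
	const blockSize = 4096   // "(4k) blocks" per the EXT4-fs messages
	before := int64(553472)  // block count before the online resize
	after := int64(1864699)  // block count after the resize
	fmt.Printf("before: %.2f GiB\n", float64(before*blockSize)/(1<<30))
	fmt.Printf("after:  %.2f GiB\n", float64(after*blockSize)/(1<<30))
}
```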
Jul 2 00:54:43.000688 extend-filesystems[1291]: Resized filesystem in /dev/vda9 Jul 2 00:54:43.003193 update_engine[1303]: I0702 00:54:43.002290 1303 main.cc:92] Flatcar Update Engine starting Jul 2 00:54:43.001678 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:54:43.001919 systemd[1]: Finished extend-filesystems.service. Jul 2 00:54:43.011987 systemd[1]: Started update-engine.service. Jul 2 00:54:43.012651 update_engine[1303]: I0702 00:54:43.012012 1303 update_check_scheduler.cc:74] Next update check in 9m39s Jul 2 00:54:43.014427 systemd[1]: Started locksmithd.service. Jul 2 00:54:43.080984 env[1312]: time="2024-07-02T00:54:43.080881492Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 00:54:43.102836 env[1312]: time="2024-07-02T00:54:43.102786212Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:54:43.103112 env[1312]: time="2024-07-02T00:54:43.103091452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:54:43.104353 env[1312]: time="2024-07-02T00:54:43.104323852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:54:43.104439 env[1312]: time="2024-07-02T00:54:43.104423892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:54:43.104767 env[1312]: time="2024-07-02T00:54:43.104743332Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:54:43.104865 env[1312]: time="2024-07-02T00:54:43.104849612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:54:43.104924 env[1312]: time="2024-07-02T00:54:43.104909172Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:54:43.104974 env[1312]: time="2024-07-02T00:54:43.104961972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:54:43.105116 env[1312]: time="2024-07-02T00:54:43.105097972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:54:43.105556 env[1312]: time="2024-07-02T00:54:43.105511892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:54:43.105795 env[1312]: time="2024-07-02T00:54:43.105774532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:54:43.105867 env[1312]: time="2024-07-02T00:54:43.105853532Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 2 00:54:43.105975 env[1312]: time="2024-07-02T00:54:43.105957012Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:54:43.106049 env[1312]: time="2024-07-02T00:54:43.106034532Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:54:43.116997 env[1312]: time="2024-07-02T00:54:43.116032572Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:54:43.116997 env[1312]: time="2024-07-02T00:54:43.116064692Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:54:43.116997 env[1312]: time="2024-07-02T00:54:43.116087492Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:54:43.116997 env[1312]: time="2024-07-02T00:54:43.116124492Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:54:43.116997 env[1312]: time="2024-07-02T00:54:43.116138092Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:54:43.116997 env[1312]: time="2024-07-02T00:54:43.116151972Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:54:43.116997 env[1312]: time="2024-07-02T00:54:43.116167652Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:54:43.116997 env[1312]: time="2024-07-02T00:54:43.116510972Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:54:43.116997 env[1312]: time="2024-07-02T00:54:43.116549772Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 00:54:43.116997 env[1312]: time="2024-07-02T00:54:43.116564652Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:54:43.116997 env[1312]: time="2024-07-02T00:54:43.116583132Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:54:43.116997 env[1312]: time="2024-07-02T00:54:43.116596012Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:54:43.116997 env[1312]: time="2024-07-02T00:54:43.116699172Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 00:54:43.116997 env[1312]: time="2024-07-02T00:54:43.116767932Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:54:43.117320 env[1312]: time="2024-07-02T00:54:43.117123812Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:54:43.117320 env[1312]: time="2024-07-02T00:54:43.117161972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:54:43.117320 env[1312]: time="2024-07-02T00:54:43.117176372Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:54:43.117492 env[1312]: time="2024-07-02T00:54:43.117455892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jul 2 00:54:43.117492 env[1312]: time="2024-07-02T00:54:43.117474212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:54:43.117492 env[1312]: time="2024-07-02T00:54:43.117488492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:54:43.117579 env[1312]: time="2024-07-02T00:54:43.117500212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:54:43.117579 env[1312]: time="2024-07-02T00:54:43.117512492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:54:43.117579 env[1312]: time="2024-07-02T00:54:43.117532492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:54:43.117579 env[1312]: time="2024-07-02T00:54:43.117545412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:54:43.117579 env[1312]: time="2024-07-02T00:54:43.117557132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:54:43.117579 env[1312]: time="2024-07-02T00:54:43.117571212Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:54:43.117709 env[1312]: time="2024-07-02T00:54:43.117699532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:54:43.117731 env[1312]: time="2024-07-02T00:54:43.117715452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:54:43.117752 env[1312]: time="2024-07-02T00:54:43.117728172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:54:43.117752 env[1312]: time="2024-07-02T00:54:43.117741212Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:54:43.117794 env[1312]: time="2024-07-02T00:54:43.117755452Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 00:54:43.117794 env[1312]: time="2024-07-02T00:54:43.117767572Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:54:43.117794 env[1312]: time="2024-07-02T00:54:43.117784812Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 00:54:43.117890 env[1312]: time="2024-07-02T00:54:43.117817532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 00:54:43.118077 env[1312]: time="2024-07-02T00:54:43.118017652Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:54:43.127586 env[1312]: time="2024-07-02T00:54:43.118088652Z" level=info msg="Connect containerd service" Jul 2 00:54:43.127586 env[1312]: time="2024-07-02T00:54:43.118122052Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:54:43.127586 env[1312]: time="2024-07-02T00:54:43.119180212Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:54:43.127586 env[1312]: time="2024-07-02T00:54:43.119717972Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:54:43.127586 env[1312]: time="2024-07-02T00:54:43.119758132Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 2 00:54:43.127586 env[1312]: time="2024-07-02T00:54:43.120799732Z" level=info msg="containerd successfully booted in 0.040950s" Jul 2 00:54:43.127586 env[1312]: time="2024-07-02T00:54:43.123218492Z" level=info msg="Start subscribing containerd event" Jul 2 00:54:43.127586 env[1312]: time="2024-07-02T00:54:43.123289532Z" level=info msg="Start recovering state" Jul 2 00:54:43.127586 env[1312]: time="2024-07-02T00:54:43.123355692Z" level=info msg="Start event monitor" Jul 2 00:54:43.127586 env[1312]: time="2024-07-02T00:54:43.123378892Z" level=info msg="Start snapshots syncer" Jul 2 00:54:43.127586 env[1312]: time="2024-07-02T00:54:43.123390052Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:54:43.127586 env[1312]: time="2024-07-02T00:54:43.123397772Z" level=info msg="Start streaming server" Jul 2 00:54:43.119922 systemd[1]: Started containerd.service. Jul 2 00:54:43.130495 locksmithd[1351]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:54:43.301011 tar[1309]: linux-arm64/LICENSE Jul 2 00:54:43.301191 tar[1309]: linux-arm64/README.md Jul 2 00:54:43.305155 systemd[1]: Finished prepare-helm.service. Jul 2 00:54:43.497632 systemd-networkd[1093]: eth0: Gained IPv6LL Jul 2 00:54:43.499294 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 00:54:43.500303 systemd[1]: Reached target network-online.target. Jul 2 00:54:43.502600 systemd[1]: Starting kubelet.service... Jul 2 00:54:43.998474 systemd[1]: Started kubelet.service. Jul 2 00:54:44.496402 kubelet[1373]: E0702 00:54:44.496261 1373 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:54:44.498652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:54:44.498795 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:54:44.992992 sshd_keygen[1307]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:54:45.010902 systemd[1]: Finished sshd-keygen.service. Jul 2 00:54:45.013061 systemd[1]: Starting issuegen.service... Jul 2 00:54:45.017690 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:54:45.017916 systemd[1]: Finished issuegen.service. Jul 2 00:54:45.020194 systemd[1]: Starting systemd-user-sessions.service... Jul 2 00:54:45.031463 systemd[1]: Finished systemd-user-sessions.service. Jul 2 00:54:45.033693 systemd[1]: Started getty@tty1.service. Jul 2 00:54:45.035680 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 2 00:54:45.036504 systemd[1]: Reached target getty.target. Jul 2 00:54:45.037159 systemd[1]: Reached target multi-user.target. Jul 2 00:54:45.039152 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 00:54:45.045793 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 00:54:45.046015 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 00:54:45.046911 systemd[1]: Startup finished in 4.984s (kernel) + 5.555s (userspace) = 10.540s. Jul 2 00:54:47.368779 systemd[1]: Created slice system-sshd.slice. Jul 2 00:54:47.370425 systemd[1]: Started sshd@0-10.0.0.97:22-10.0.0.1:37064.service. 
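At this point containerd reports booting in about 41 ms and serving on /run/containerd/containerd.sock. As a hedged sketch (assuming the github.com/containerd/containerd Go client module, which the log itself does not show), a process on the host could confirm the daemon and the 1.6.16 version string printed above like this:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
)

func main() {
	// Socket path comes straight from the "serving..." entries above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	v, err := client.Version(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("containerd", v.Version) // the log reports version=1.6.16
}
```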
Jul 2 00:54:47.422769 sshd[1401]: Accepted publickey for core from 10.0.0.1 port 37064 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:54:47.424825 sshd[1401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:54:47.433959 systemd-logind[1300]: New session 1 of user core. Jul 2 00:54:47.435053 systemd[1]: Created slice user-500.slice. Jul 2 00:54:47.436247 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 00:54:47.445291 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 00:54:47.446762 systemd[1]: Starting user@500.service... Jul 2 00:54:47.450730 (systemd)[1406]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:54:47.517632 systemd[1406]: Queued start job for default target default.target. Jul 2 00:54:47.517873 systemd[1406]: Reached target paths.target. Jul 2 00:54:47.517888 systemd[1406]: Reached target sockets.target. Jul 2 00:54:47.517898 systemd[1406]: Reached target timers.target. Jul 2 00:54:47.517921 systemd[1406]: Reached target basic.target. Jul 2 00:54:47.517968 systemd[1406]: Reached target default.target. Jul 2 00:54:47.517989 systemd[1406]: Startup finished in 61ms. Jul 2 00:54:47.518124 systemd[1]: Started user@500.service. Jul 2 00:54:47.519081 systemd[1]: Started session-1.scope. Jul 2 00:54:47.568871 systemd[1]: Started sshd@1-10.0.0.97:22-10.0.0.1:37068.service. Jul 2 00:54:47.612353 sshd[1415]: Accepted publickey for core from 10.0.0.1 port 37068 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:54:47.614113 sshd[1415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:54:47.618667 systemd[1]: Started session-2.scope. Jul 2 00:54:47.619087 systemd-logind[1300]: New session 2 of user core. Jul 2 00:54:47.673257 sshd[1415]: pam_unix(sshd:session): session closed for user core Jul 2 00:54:47.675423 systemd[1]: Started sshd@2-10.0.0.97:22-10.0.0.1:37076.service. Jul 2 00:54:47.676976 systemd[1]: sshd@1-10.0.0.97:22-10.0.0.1:37068.service: Deactivated successfully. Jul 2 00:54:47.677830 systemd-logind[1300]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:54:47.677885 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:54:47.678510 systemd-logind[1300]: Removed session 2. Jul 2 00:54:47.718506 sshd[1420]: Accepted publickey for core from 10.0.0.1 port 37076 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:54:47.719757 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:54:47.723029 systemd-logind[1300]: New session 3 of user core. Jul 2 00:54:47.723826 systemd[1]: Started session-3.scope. Jul 2 00:54:47.773641 sshd[1420]: pam_unix(sshd:session): session closed for user core Jul 2 00:54:47.775846 systemd[1]: Started sshd@3-10.0.0.97:22-10.0.0.1:37088.service. Jul 2 00:54:47.776372 systemd[1]: sshd@2-10.0.0.97:22-10.0.0.1:37076.service: Deactivated successfully. Jul 2 00:54:47.777313 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:54:47.777336 systemd-logind[1300]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:54:47.778130 systemd-logind[1300]: Removed session 3. Jul 2 00:54:47.818640 sshd[1427]: Accepted publickey for core from 10.0.0.1 port 37088 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:54:47.819694 sshd[1427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:54:47.822601 systemd-logind[1300]: New session 4 of user core. 
Jul 2 00:54:47.823431 systemd[1]: Started session-4.scope. Jul 2 00:54:47.877553 sshd[1427]: pam_unix(sshd:session): session closed for user core Jul 2 00:54:47.880060 systemd[1]: Started sshd@4-10.0.0.97:22-10.0.0.1:37104.service. Jul 2 00:54:47.880873 systemd[1]: sshd@3-10.0.0.97:22-10.0.0.1:37088.service: Deactivated successfully. Jul 2 00:54:47.881790 systemd-logind[1300]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:54:47.881955 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:54:47.882626 systemd-logind[1300]: Removed session 4. Jul 2 00:54:47.921929 sshd[1434]: Accepted publickey for core from 10.0.0.1 port 37104 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:54:47.923015 sshd[1434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:54:47.925970 systemd-logind[1300]: New session 5 of user core. Jul 2 00:54:47.926865 systemd[1]: Started session-5.scope. Jul 2 00:54:47.992486 sudo[1440]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:54:47.992719 sudo[1440]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:54:48.050143 systemd[1]: Starting docker.service... Jul 2 00:54:48.131951 env[1452]: time="2024-07-02T00:54:48.131427772Z" level=info msg="Starting up" Jul 2 00:54:48.133446 env[1452]: time="2024-07-02T00:54:48.133408212Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 00:54:48.133446 env[1452]: time="2024-07-02T00:54:48.133425732Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 00:54:48.133446 env[1452]: time="2024-07-02T00:54:48.133447252Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 00:54:48.133569 env[1452]: time="2024-07-02T00:54:48.133457612Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 00:54:48.139623 env[1452]: time="2024-07-02T00:54:48.139592452Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 00:54:48.139623 env[1452]: time="2024-07-02T00:54:48.139615452Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 00:54:48.139714 env[1452]: time="2024-07-02T00:54:48.139631932Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 00:54:48.139714 env[1452]: time="2024-07-02T00:54:48.139641172Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 00:54:48.328025 env[1452]: time="2024-07-02T00:54:48.327987372Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 2 00:54:48.328222 env[1452]: time="2024-07-02T00:54:48.328206772Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 2 00:54:48.328424 env[1452]: time="2024-07-02T00:54:48.328407372Z" level=info msg="Loading containers: start." Jul 2 00:54:48.444561 kernel: Initializing XFRM netlink socket Jul 2 00:54:48.471305 env[1452]: time="2024-07-02T00:54:48.471269172Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 00:54:48.523983 systemd-networkd[1093]: docker0: Link UP Jul 2 00:54:48.533182 env[1452]: time="2024-07-02T00:54:48.533149412Z" level=info msg="Loading containers: done." 
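Once the daemon finishes initializing (next entries), Docker's REST API answers on a unix socket rather than TCP. A minimal illustrative Go client, assuming the standard Docker Engine API /version endpoint and the /run/docker.sock path that the socket unit references elsewhere in this log:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	// Point an http.Client's dialer at the unix socket; the host in
	// the request URL is a placeholder and is never resolved.
	c := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}
	resp, err := c.Get("http://unix/version")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON including "Version":"20.10.23" per the log
}
```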
Jul 2 00:54:48.550889 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2596704755-merged.mount: Deactivated successfully. Jul 2 00:54:48.553393 env[1452]: time="2024-07-02T00:54:48.553343452Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:54:48.553572 env[1452]: time="2024-07-02T00:54:48.553555252Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 00:54:48.553666 env[1452]: time="2024-07-02T00:54:48.553651852Z" level=info msg="Daemon has completed initialization" Jul 2 00:54:48.568903 systemd[1]: Started docker.service. Jul 2 00:54:48.575952 env[1452]: time="2024-07-02T00:54:48.575895812Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:54:49.126145 env[1312]: time="2024-07-02T00:54:49.125885412Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 00:54:49.637734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482441821.mount: Deactivated successfully. Jul 2 00:54:51.291102 env[1312]: time="2024-07-02T00:54:51.291033612Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:51.293057 env[1312]: time="2024-07-02T00:54:51.293019652Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:51.294979 env[1312]: time="2024-07-02T00:54:51.294940732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:51.296638 env[1312]: time="2024-07-02T00:54:51.296609772Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:51.298290 env[1312]: time="2024-07-02T00:54:51.298252812Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jul 2 00:54:51.307815 env[1312]: time="2024-07-02T00:54:51.307773932Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 00:54:52.958280 env[1312]: time="2024-07-02T00:54:52.958223452Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:52.962571 env[1312]: time="2024-07-02T00:54:52.962534772Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:52.964388 env[1312]: time="2024-07-02T00:54:52.964353732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:52.966484 env[1312]: time="2024-07-02T00:54:52.966451652Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:52.967289 env[1312]: time="2024-07-02T00:54:52.967248892Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jul 2 00:54:52.976287 env[1312]: time="2024-07-02T00:54:52.976255692Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 00:54:54.137236 env[1312]: time="2024-07-02T00:54:54.137186732Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:54.139000 env[1312]: time="2024-07-02T00:54:54.138966652Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:54.141123 env[1312]: time="2024-07-02T00:54:54.141097012Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:54.142635 env[1312]: time="2024-07-02T00:54:54.142607092Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:54.143423 env[1312]: time="2024-07-02T00:54:54.143395932Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jul 2 00:54:54.153468 env[1312]: time="2024-07-02T00:54:54.153441452Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 00:54:54.548220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:54:54.548379 systemd[1]: Stopped kubelet.service. Jul 2 00:54:54.549912 systemd[1]: Starting kubelet.service... Jul 2 00:54:54.628397 systemd[1]: Started kubelet.service. Jul 2 00:54:54.669395 kubelet[1617]: E0702 00:54:54.669336 1617 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:54:54.672329 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:54:54.672471 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:54:55.247938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3306135724.mount: Deactivated successfully. 
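The kubelet failure above (and the identical one at 00:54:44) traces to a single missing file: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, so until that happens every start exits with status 1 and systemd schedules another restart, incrementing the restart counter. A minimal illustrative Go sketch of that failure mode (not kubelet's actual code):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// kubelet.service is started before anything has written its
	// config file, so each attempt fails the same way and systemd
	// restarts it (the log shows restart counters 1 and 2).
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); err != nil {
		fmt.Fprintf(os.Stderr, "kubelet config missing: %v\n", err)
		os.Exit(1) // matches status=1/FAILURE in the log
	}
	fmt.Println("config present; kubelet would proceed")
}
```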
Jul 2 00:54:56.379450 env[1312]: time="2024-07-02T00:54:56.379402492Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:56.381026 env[1312]: time="2024-07-02T00:54:56.380996252Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:56.382490 env[1312]: time="2024-07-02T00:54:56.382453372Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:56.383809 env[1312]: time="2024-07-02T00:54:56.383782572Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:56.384165 env[1312]: time="2024-07-02T00:54:56.384138732Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jul 2 00:54:56.393797 env[1312]: time="2024-07-02T00:54:56.393761532Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 00:54:56.888325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2336232320.mount: Deactivated successfully. Jul 2 00:54:56.892170 env[1312]: time="2024-07-02T00:54:56.892129412Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:56.893384 env[1312]: time="2024-07-02T00:54:56.893353972Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:56.895039 env[1312]: time="2024-07-02T00:54:56.895004572Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:56.896932 env[1312]: time="2024-07-02T00:54:56.896899372Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:56.897475 env[1312]: time="2024-07-02T00:54:56.897438372Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 00:54:56.906234 env[1312]: time="2024-07-02T00:54:56.906189172Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 00:54:57.423579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount488745405.mount: Deactivated successfully. 
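The PullImage sequences above are CRI-initiated pulls executed by containerd. A hedged equivalent using the containerd Go client (assumed, as in the earlier sketch); the k8s.io namespace is where the CRI plugin keeps its images, and the reference matches the pause:3.9 pull just completed:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// WithPullUnpack also unpacks layers into the overlayfs
	// snapshotter, mirroring what the CRI plugin does on pull.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "->", img.Target().Digest)
}
```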
Jul 2 00:54:59.332306 env[1312]: time="2024-07-02T00:54:59.332246372Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:59.333585 env[1312]: time="2024-07-02T00:54:59.333556612Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:59.337630 env[1312]: time="2024-07-02T00:54:59.337592412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:59.339282 env[1312]: time="2024-07-02T00:54:59.339254052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:54:59.340121 env[1312]: time="2024-07-02T00:54:59.340083052Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jul 2 00:54:59.350040 env[1312]: time="2024-07-02T00:54:59.350001972Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 2 00:54:59.928175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2762911226.mount: Deactivated successfully. Jul 2 00:55:00.489338 env[1312]: time="2024-07-02T00:55:00.489291732Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:00.490738 env[1312]: time="2024-07-02T00:55:00.490713052Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:00.492613 env[1312]: time="2024-07-02T00:55:00.492581532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:00.493645 env[1312]: time="2024-07-02T00:55:00.493617652Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:00.494900 env[1312]: time="2024-07-02T00:55:00.494868092Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jul 2 00:55:04.798208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 00:55:04.798378 systemd[1]: Stopped kubelet.service. Jul 2 00:55:04.799836 systemd[1]: Starting kubelet.service... Jul 2 00:55:04.878768 systemd[1]: Started kubelet.service. 
Jul 2 00:55:04.920099 kubelet[1726]: E0702 00:55:04.920046 1726 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:55:04.922480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:55:04.922636 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:55:05.104838 systemd[1]: Stopped kubelet.service. Jul 2 00:55:05.106816 systemd[1]: Starting kubelet.service... Jul 2 00:55:05.123568 systemd[1]: Reloading. Jul 2 00:55:05.176347 /usr/lib/systemd/system-generators/torcx-generator[1763]: time="2024-07-02T00:55:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:55:05.176738 /usr/lib/systemd/system-generators/torcx-generator[1763]: time="2024-07-02T00:55:05Z" level=info msg="torcx already run" Jul 2 00:55:05.318240 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:55:05.318260 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:55:05.333366 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:55:05.394589 systemd[1]: Started kubelet.service. Jul 2 00:55:05.395811 systemd[1]: Stopping kubelet.service... Jul 2 00:55:05.396064 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:55:05.396322 systemd[1]: Stopped kubelet.service. Jul 2 00:55:05.398136 systemd[1]: Starting kubelet.service... Jul 2 00:55:05.476132 systemd[1]: Started kubelet.service. Jul 2 00:55:05.525069 kubelet[1820]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:55:05.525069 kubelet[1820]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:55:05.525069 kubelet[1820]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:55:05.525674 kubelet[1820]: I0702 00:55:05.525105 1820 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:55:06.001386 kubelet[1820]: I0702 00:55:06.001353 1820 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:55:06.001386 kubelet[1820]: I0702 00:55:06.001380 1820 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:55:06.001611 kubelet[1820]: I0702 00:55:06.001596 1820 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:55:06.015618 kubelet[1820]: I0702 00:55:06.015600 1820 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:55:06.022810 kubelet[1820]: E0702 00:55:06.022789 1820 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:06.035028 kubelet[1820]: W0702 00:55:06.035000 1820 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 00:55:06.035739 kubelet[1820]: I0702 00:55:06.035715 1820 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 00:55:06.037519 kubelet[1820]: I0702 00:55:06.037500 1820 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:55:06.037706 kubelet[1820]: I0702 00:55:06.037682 1820 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:55:06.037781 kubelet[1820]: I0702 00:55:06.037712 1820 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:55:06.037781 kubelet[1820]: I0702 00:55:06.037721 1820 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:55:06.037898 kubelet[1820]: I0702 00:55:06.037874 1820 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:55:06.040770 kubelet[1820]: I0702 
00:55:06.040749 1820 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:55:06.040810 kubelet[1820]: I0702 00:55:06.040773 1820 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:55:06.040876 kubelet[1820]: I0702 00:55:06.040863 1820 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:55:06.040876 kubelet[1820]: I0702 00:55:06.040877 1820 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:55:06.043337 kubelet[1820]: W0702 00:55:06.043291 1820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:06.043519 kubelet[1820]: E0702 00:55:06.043504 1820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:06.043626 kubelet[1820]: W0702 00:55:06.043308 1820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:06.043718 kubelet[1820]: E0702 00:55:06.043704 1820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:06.044252 kubelet[1820]: I0702 00:55:06.044238 1820 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 00:55:06.048048 kubelet[1820]: W0702 00:55:06.048019 1820 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:55:06.050341 kubelet[1820]: I0702 00:55:06.050318 1820 server.go:1232] "Started kubelet" Jul 2 00:55:06.051686 kubelet[1820]: E0702 00:55:06.051651 1820 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:55:06.051777 kubelet[1820]: E0702 00:55:06.051685 1820 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:55:06.051895 kubelet[1820]: I0702 00:55:06.051872 1820 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:55:06.052157 kubelet[1820]: I0702 00:55:06.052135 1820 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:55:06.052202 kubelet[1820]: I0702 00:55:06.052192 1820 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:55:06.052677 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
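All the reflector and lease errors above fail the same way: nothing is listening on 10.0.0.97:6443 yet, because the API server is itself one of the static pods the kubelet is only now admitting (see the Topology Admit Handler entries below). A trivial illustrative check reproducing the symptom:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A plain TCP dial to the API server address from the log,
	// made before the kube-apiserver static pod is running.
	_, err := net.DialTimeout("tcp", "10.0.0.97:6443", 2*time.Second)
	fmt.Println(err) // e.g. dial tcp 10.0.0.97:6443: connect: connection refused
}
```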
Jul 2 00:55:06.052803 kubelet[1820]: I0702 00:55:06.052782 1820 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:55:06.053401 kubelet[1820]: I0702 00:55:06.053313 1820 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:55:06.056881 kubelet[1820]: E0702 00:55:06.056856 1820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:55:06.056881 kubelet[1820]: I0702 00:55:06.056885 1820 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:55:06.056985 kubelet[1820]: I0702 00:55:06.056973 1820 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:55:06.057050 kubelet[1820]: I0702 00:55:06.057039 1820 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:55:06.057393 kubelet[1820]: W0702 00:55:06.057340 1820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:06.057440 kubelet[1820]: E0702 00:55:06.057402 1820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:06.057792 kubelet[1820]: E0702 00:55:06.057768 1820 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="200ms" Jul 2 00:55:06.058322 kubelet[1820]: E0702 00:55:06.058231 1820 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de3f4ca0d20954", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 55, 6, 50292052, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 55, 6, 50292052, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.97:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.97:6443: connect: connection refused'(may retry after sleeping) Jul 2 00:55:06.072984 kubelet[1820]: I0702 00:55:06.072957 1820 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:55:06.073962 kubelet[1820]: I0702 00:55:06.073940 1820 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:55:06.073962 kubelet[1820]: I0702 00:55:06.073961 1820 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:55:06.074067 kubelet[1820]: I0702 00:55:06.073977 1820 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:55:06.074067 kubelet[1820]: E0702 00:55:06.074028 1820 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:55:06.074451 kubelet[1820]: W0702 00:55:06.074428 1820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:06.074521 kubelet[1820]: E0702 00:55:06.074459 1820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:06.094728 kubelet[1820]: I0702 00:55:06.094705 1820 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:55:06.094866 kubelet[1820]: I0702 00:55:06.094854 1820 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:55:06.094957 kubelet[1820]: I0702 00:55:06.094947 1820 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:55:06.096764 kubelet[1820]: I0702 00:55:06.096746 1820 policy_none.go:49] "None policy: Start" Jul 2 00:55:06.097309 kubelet[1820]: I0702 00:55:06.097291 1820 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:55:06.097375 kubelet[1820]: I0702 00:55:06.097330 1820 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:55:06.104889 kubelet[1820]: I0702 00:55:06.104864 1820 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:55:06.105870 kubelet[1820]: I0702 00:55:06.105849 1820 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:55:06.106157 kubelet[1820]: E0702 00:55:06.106142 1820 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 00:55:06.158422 kubelet[1820]: I0702 00:55:06.158402 1820 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:55:06.158989 kubelet[1820]: E0702 00:55:06.158965 1820 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jul 2 00:55:06.174259 kubelet[1820]: I0702 00:55:06.174234 1820 topology_manager.go:215] "Topology Admit Handler" podUID="1a5cfbbcbf128b888d509f7c92d652af" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:55:06.175167 kubelet[1820]: I0702 00:55:06.175138 1820 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:55:06.175930 kubelet[1820]: I0702 00:55:06.175910 1820 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:55:06.259404 kubelet[1820]: E0702 00:55:06.258712 1820 controller.go:146] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="400ms" Jul 2 00:55:06.259404 kubelet[1820]: I0702 00:55:06.258735 1820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a5cfbbcbf128b888d509f7c92d652af-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a5cfbbcbf128b888d509f7c92d652af\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:55:06.259404 kubelet[1820]: I0702 00:55:06.258774 1820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a5cfbbcbf128b888d509f7c92d652af-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a5cfbbcbf128b888d509f7c92d652af\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:55:06.259404 kubelet[1820]: I0702 00:55:06.258793 1820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:55:06.259404 kubelet[1820]: I0702 00:55:06.258812 1820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:55:06.259637 kubelet[1820]: I0702 00:55:06.258830 1820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:55:06.259637 kubelet[1820]: I0702 00:55:06.258847 1820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a5cfbbcbf128b888d509f7c92d652af-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a5cfbbcbf128b888d509f7c92d652af\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:55:06.259637 kubelet[1820]: I0702 00:55:06.258883 1820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:55:06.259637 kubelet[1820]: I0702 00:55:06.258913 1820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:55:06.259637 kubelet[1820]: I0702 00:55:06.258933 1820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:55:06.360284 kubelet[1820]: I0702 00:55:06.360258 1820 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:55:06.360755 kubelet[1820]: E0702 00:55:06.360720 1820 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jul 2 00:55:06.485054 kubelet[1820]: E0702 00:55:06.485019 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:06.485151 kubelet[1820]: E0702 00:55:06.485072 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:06.485943 env[1312]: time="2024-07-02T00:55:06.485761812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jul 2 00:55:06.485943 env[1312]: time="2024-07-02T00:55:06.485820332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jul 2 00:55:06.487967 kubelet[1820]: E0702 00:55:06.487942 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:06.488436 env[1312]: time="2024-07-02T00:55:06.488402932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a5cfbbcbf128b888d509f7c92d652af,Namespace:kube-system,Attempt:0,}" Jul 2 00:55:06.659821 kubelet[1820]: E0702 00:55:06.659790 1820 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="800ms" Jul 2 00:55:06.762185 kubelet[1820]: I0702 00:55:06.762152 1820 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:55:06.762650 kubelet[1820]: E0702 00:55:06.762634 1820 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jul 2 00:55:06.902254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819801316.mount: Deactivated successfully. 
Jul 2 00:55:06.906217 env[1312]: time="2024-07-02T00:55:06.906172172Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:06.908840 env[1312]: time="2024-07-02T00:55:06.908802852Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:06.909718 env[1312]: time="2024-07-02T00:55:06.909682132Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:06.910682 env[1312]: time="2024-07-02T00:55:06.910607452Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:06.912043 env[1312]: time="2024-07-02T00:55:06.912006132Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:06.913478 env[1312]: time="2024-07-02T00:55:06.913450532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:06.915909 env[1312]: time="2024-07-02T00:55:06.915885132Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:06.918957 env[1312]: time="2024-07-02T00:55:06.918929492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:06.921411 env[1312]: time="2024-07-02T00:55:06.921382852Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:06.922146 env[1312]: time="2024-07-02T00:55:06.922115412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:06.922810 env[1312]: time="2024-07-02T00:55:06.922788572Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:06.923518 env[1312]: time="2024-07-02T00:55:06.923493572Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:55:06.937458 kubelet[1820]: W0702 00:55:06.936697 1820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:06.937458 kubelet[1820]: E0702 00:55:06.936771 1820 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:06.949631 env[1312]: time="2024-07-02T00:55:06.948812812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:55:06.949631 env[1312]: time="2024-07-02T00:55:06.948843812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:55:06.949631 env[1312]: time="2024-07-02T00:55:06.948854452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:55:06.949631 env[1312]: time="2024-07-02T00:55:06.949055412Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6959db57dfc74009562389538667dc94a16bda5d72390ac1ee8af69125f8870 pid=1873 runtime=io.containerd.runc.v2 Jul 2 00:55:06.949631 env[1312]: time="2024-07-02T00:55:06.948554252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:55:06.949631 env[1312]: time="2024-07-02T00:55:06.948597012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:55:06.949631 env[1312]: time="2024-07-02T00:55:06.948608372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:55:06.949631 env[1312]: time="2024-07-02T00:55:06.948904412Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd839d674f635fc82edfe5b9ef4d72a8b0b8c1b662a0254e99921161de8aba2b pid=1872 runtime=io.containerd.runc.v2 Jul 2 00:55:06.950011 env[1312]: time="2024-07-02T00:55:06.949927412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:55:06.950011 env[1312]: time="2024-07-02T00:55:06.949959972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:55:06.950011 env[1312]: time="2024-07-02T00:55:06.949969492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:55:06.950151 env[1312]: time="2024-07-02T00:55:06.950110212Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b3bdf35eea7895a35a120f110da8a5a8af9e1114a0ea18fb3ccc8fc9ac55fba pid=1888 runtime=io.containerd.runc.v2 Jul 2 00:55:07.017973 kubelet[1820]: W0702 00:55:07.017907 1820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:07.017973 kubelet[1820]: E0702 00:55:07.017971 1820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:07.039698 env[1312]: time="2024-07-02T00:55:07.039631212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a5cfbbcbf128b888d509f7c92d652af,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6959db57dfc74009562389538667dc94a16bda5d72390ac1ee8af69125f8870\"" Jul 2 00:55:07.041607 kubelet[1820]: E0702 00:55:07.041578 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:07.046845 env[1312]: time="2024-07-02T00:55:07.046801172Z" level=info msg="CreateContainer within sandbox \"a6959db57dfc74009562389538667dc94a16bda5d72390ac1ee8af69125f8870\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:55:07.048084 env[1312]: time="2024-07-02T00:55:07.048053772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd839d674f635fc82edfe5b9ef4d72a8b0b8c1b662a0254e99921161de8aba2b\"" Jul 2 00:55:07.049130 kubelet[1820]: E0702 00:55:07.049107 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:07.051429 env[1312]: time="2024-07-02T00:55:07.051387052Z" level=info msg="CreateContainer within sandbox \"bd839d674f635fc82edfe5b9ef4d72a8b0b8c1b662a0254e99921161de8aba2b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:55:07.058552 env[1312]: time="2024-07-02T00:55:07.058493292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b3bdf35eea7895a35a120f110da8a5a8af9e1114a0ea18fb3ccc8fc9ac55fba\"" Jul 2 00:55:07.059203 kubelet[1820]: E0702 00:55:07.059176 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:07.061762 env[1312]: time="2024-07-02T00:55:07.061722492Z" level=info msg="CreateContainer within sandbox \"6b3bdf35eea7895a35a120f110da8a5a8af9e1114a0ea18fb3ccc8fc9ac55fba\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:55:07.075682 env[1312]: time="2024-07-02T00:55:07.075634932Z" level=info msg="CreateContainer within sandbox 
\"a6959db57dfc74009562389538667dc94a16bda5d72390ac1ee8af69125f8870\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"348e4fcaa7ae8b2a444692cb9d8f071558cc3491858bae756b581119a3228513\"" Jul 2 00:55:07.076397 env[1312]: time="2024-07-02T00:55:07.076360532Z" level=info msg="StartContainer for \"348e4fcaa7ae8b2a444692cb9d8f071558cc3491858bae756b581119a3228513\"" Jul 2 00:55:07.078770 env[1312]: time="2024-07-02T00:55:07.078726652Z" level=info msg="CreateContainer within sandbox \"bd839d674f635fc82edfe5b9ef4d72a8b0b8c1b662a0254e99921161de8aba2b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6ecb0800145ee7c46cb10be972a7432643ae863355bd191860e4e1501cd0c0d7\"" Jul 2 00:55:07.079269 env[1312]: time="2024-07-02T00:55:07.079230612Z" level=info msg="StartContainer for \"6ecb0800145ee7c46cb10be972a7432643ae863355bd191860e4e1501cd0c0d7\"" Jul 2 00:55:07.082776 env[1312]: time="2024-07-02T00:55:07.082735852Z" level=info msg="CreateContainer within sandbox \"6b3bdf35eea7895a35a120f110da8a5a8af9e1114a0ea18fb3ccc8fc9ac55fba\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40f13093ae11555b1a92cdb358e39ca91950b9154b1321ecb784d6eb558e7a79\"" Jul 2 00:55:07.083262 env[1312]: time="2024-07-02T00:55:07.083235532Z" level=info msg="StartContainer for \"40f13093ae11555b1a92cdb358e39ca91950b9154b1321ecb784d6eb558e7a79\"" Jul 2 00:55:07.156096 env[1312]: time="2024-07-02T00:55:07.156040412Z" level=info msg="StartContainer for \"348e4fcaa7ae8b2a444692cb9d8f071558cc3491858bae756b581119a3228513\" returns successfully" Jul 2 00:55:07.195370 env[1312]: time="2024-07-02T00:55:07.191515092Z" level=info msg="StartContainer for \"40f13093ae11555b1a92cdb358e39ca91950b9154b1321ecb784d6eb558e7a79\" returns successfully" Jul 2 00:55:07.195370 env[1312]: time="2024-07-02T00:55:07.192109212Z" level=info msg="StartContainer for \"6ecb0800145ee7c46cb10be972a7432643ae863355bd191860e4e1501cd0c0d7\" returns successfully" Jul 2 00:55:07.317171 kubelet[1820]: W0702 00:55:07.317113 1820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:07.317171 kubelet[1820]: E0702 00:55:07.317177 1820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:07.349780 kubelet[1820]: W0702 00:55:07.349722 1820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:07.349878 kubelet[1820]: E0702 00:55:07.349789 1820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 2 00:55:07.565010 kubelet[1820]: I0702 00:55:07.564723 1820 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:55:08.085574 kubelet[1820]: E0702 00:55:08.085520 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:08.087698 kubelet[1820]: E0702 00:55:08.087606 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:08.089127 kubelet[1820]: E0702 00:55:08.089104 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:08.645131 kubelet[1820]: E0702 00:55:08.645098 1820 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 00:55:08.704945 kubelet[1820]: I0702 00:55:08.704909 1820 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 00:55:09.044379 kubelet[1820]: I0702 00:55:09.044284 1820 apiserver.go:52] "Watching apiserver" Jul 2 00:55:09.057956 kubelet[1820]: I0702 00:55:09.057929 1820 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:55:09.094251 kubelet[1820]: E0702 00:55:09.094226 1820 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 2 00:55:09.094577 kubelet[1820]: E0702 00:55:09.094334 1820 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 2 00:55:09.094577 kubelet[1820]: E0702 00:55:09.094336 1820 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 2 00:55:09.094627 kubelet[1820]: E0702 00:55:09.094618 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:09.094726 kubelet[1820]: E0702 00:55:09.094708 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:09.094763 kubelet[1820]: E0702 00:55:09.094739 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:10.094884 kubelet[1820]: E0702 00:55:10.094858 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:10.402748 kubelet[1820]: E0702 00:55:10.402650 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:11.088968 systemd[1]: Reloading. 
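
The recurring dns.go:153 warnings here are expected rather than fatal: glibc-style resolvers honour at most three nameserver entries, so when the host's resolv.conf lists more, the kubelet drops the extras and logs the line it actually applied (1.1.1.1 1.0.0.1 8.8.8.8). A stdlib-only Go sketch of that check, as an illustration of the rule rather than the kubelet's own implementation:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS: resolvers use at most 3 entries

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Printf("%d nameserver(s), within the limit\n", len(servers))
}
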
Jul 2 00:55:11.091599 kubelet[1820]: E0702 00:55:11.091568 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:11.092283 kubelet[1820]: E0702 00:55:11.092268 1820 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:11.133905 /usr/lib/systemd/system-generators/torcx-generator[2118]: time="2024-07-02T00:55:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:55:11.133937 /usr/lib/systemd/system-generators/torcx-generator[2118]: time="2024-07-02T00:55:11Z" level=info msg="torcx already run" Jul 2 00:55:11.288393 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:55:11.288415 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:55:11.303715 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:55:11.371045 systemd[1]: Stopping kubelet.service... Jul 2 00:55:11.371231 kubelet[1820]: I0702 00:55:11.371123 1820 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:55:11.389926 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:55:11.390265 systemd[1]: Stopped kubelet.service. Jul 2 00:55:11.391864 systemd[1]: Starting kubelet.service... Jul 2 00:55:11.473056 systemd[1]: Started kubelet.service. Jul 2 00:55:11.534838 sudo[2185]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 00:55:11.535066 sudo[2185]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 00:55:11.539064 kubelet[2172]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:55:11.539064 kubelet[2172]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:55:11.539064 kubelet[2172]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
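
Two housekeeping themes surface during this reload: systemd flags legacy cgroup-v1 directives in locksmithd.service (CPUShares= has a cgroup-v2 successor in CPUWeight=, MemoryLimit= in MemoryMax=), and the restarted kubelet repeats its deprecated-flag warnings. A toy Go linter for the unit-file half; it only reports the old directives, since translating share values onto the weight scale is a separate exercise:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Directives systemd warns about above, with their cgroup-v2 replacements.
var deprecated = map[string]string{
	"CPUShares":   "CPUWeight",
	"MemoryLimit": "MemoryMax",
}

func main() {
	path := "/usr/lib/systemd/system/locksmithd.service" // unit from the log
	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for line := 1; sc.Scan(); line++ {
		text := strings.TrimSpace(sc.Text())
		for old, repl := range deprecated {
			if strings.HasPrefix(text, old+"=") {
				fmt.Printf("%s:%d: %s= is deprecated; use %s= instead\n", path, line, old, repl)
			}
		}
	}
}
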
Jul 2 00:55:11.539064 kubelet[2172]: I0702 00:55:11.538711 2172 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:55:11.545551 kubelet[2172]: I0702 00:55:11.543726 2172 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:55:11.545551 kubelet[2172]: I0702 00:55:11.543750 2172 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:55:11.545551 kubelet[2172]: I0702 00:55:11.543908 2172 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:55:11.545551 kubelet[2172]: I0702 00:55:11.545317 2172 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:55:11.546261 kubelet[2172]: I0702 00:55:11.546241 2172 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:55:11.550130 kubelet[2172]: W0702 00:55:11.550116 2172 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 00:55:11.550838 kubelet[2172]: I0702 00:55:11.550825 2172 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 00:55:11.551199 kubelet[2172]: I0702 00:55:11.551186 2172 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:55:11.551346 kubelet[2172]: I0702 00:55:11.551328 2172 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:55:11.551422 kubelet[2172]: I0702 00:55:11.551359 2172 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:55:11.551422 kubelet[2172]: I0702 00:55:11.551367 2172 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:55:11.551422 kubelet[2172]: I0702 00:55:11.551399 2172 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:55:11.551492 kubelet[2172]: I0702 00:55:11.551469 2172 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:55:11.551492 kubelet[2172]: I0702 00:55:11.551485 2172 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 
00:55:11.551546 kubelet[2172]: I0702 00:55:11.551506 2172 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:55:11.551546 kubelet[2172]: I0702 00:55:11.551515 2172 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:55:11.553875 kubelet[2172]: I0702 00:55:11.553849 2172 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 00:55:11.554316 kubelet[2172]: I0702 00:55:11.554297 2172 server.go:1232] "Started kubelet" Jul 2 00:55:11.555350 kubelet[2172]: I0702 00:55:11.555331 2172 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:55:11.555863 kubelet[2172]: I0702 00:55:11.555832 2172 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:55:11.555989 kubelet[2172]: I0702 00:55:11.555978 2172 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:55:11.556182 kubelet[2172]: I0702 00:55:11.556153 2172 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:55:11.557131 kubelet[2172]: I0702 00:55:11.557109 2172 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:55:11.559292 kubelet[2172]: E0702 00:55:11.559267 2172 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:55:11.559403 kubelet[2172]: E0702 00:55:11.559392 2172 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:55:11.561553 kubelet[2172]: I0702 00:55:11.561523 2172 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:55:11.561874 kubelet[2172]: I0702 00:55:11.561853 2172 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:55:11.562185 kubelet[2172]: I0702 00:55:11.562168 2172 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:55:11.594283 kubelet[2172]: I0702 00:55:11.594238 2172 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:55:11.595068 kubelet[2172]: I0702 00:55:11.595037 2172 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:55:11.595068 kubelet[2172]: I0702 00:55:11.595061 2172 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:55:11.595170 kubelet[2172]: I0702 00:55:11.595130 2172 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:55:11.595197 kubelet[2172]: E0702 00:55:11.595179 2172 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:55:11.649213 kubelet[2172]: I0702 00:55:11.649125 2172 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:55:11.649347 kubelet[2172]: I0702 00:55:11.649334 2172 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:55:11.649412 kubelet[2172]: I0702 00:55:11.649403 2172 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:55:11.649631 kubelet[2172]: I0702 00:55:11.649617 2172 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:55:11.649721 kubelet[2172]: I0702 00:55:11.649710 2172 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:55:11.649777 kubelet[2172]: I0702 00:55:11.649769 2172 policy_none.go:49] "None policy: Start" Jul 2 00:55:11.650905 kubelet[2172]: I0702 00:55:11.650885 2172 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:55:11.651013 kubelet[2172]: I0702 00:55:11.651002 2172 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:55:11.651238 kubelet[2172]: I0702 00:55:11.651222 2172 state_mem.go:75] "Updated machine memory state" Jul 2 00:55:11.652312 kubelet[2172]: I0702 00:55:11.652290 2172 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:55:11.652604 kubelet[2172]: I0702 00:55:11.652586 2172 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:55:11.665422 kubelet[2172]: I0702 00:55:11.665405 2172 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:55:11.671808 kubelet[2172]: I0702 00:55:11.671758 2172 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jul 2 00:55:11.671894 kubelet[2172]: I0702 00:55:11.671826 2172 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 00:55:11.696243 kubelet[2172]: I0702 00:55:11.696211 2172 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:55:11.696335 kubelet[2172]: I0702 00:55:11.696312 2172 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:55:11.696363 kubelet[2172]: I0702 00:55:11.696345 2172 topology_manager.go:215] "Topology Admit Handler" podUID="1a5cfbbcbf128b888d509f7c92d652af" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:55:11.703828 kubelet[2172]: E0702 00:55:11.703473 2172 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 00:55:11.703828 kubelet[2172]: E0702 00:55:11.703649 2172 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 2 00:55:11.704009 kubelet[2172]: E0702 00:55:11.703988 2172 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" 
already exists" pod="kube-system/kube-scheduler-localhost" Jul 2 00:55:11.763288 kubelet[2172]: I0702 00:55:11.763256 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:55:11.763389 kubelet[2172]: I0702 00:55:11.763296 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:55:11.763389 kubelet[2172]: I0702 00:55:11.763329 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a5cfbbcbf128b888d509f7c92d652af-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a5cfbbcbf128b888d509f7c92d652af\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:55:11.763443 kubelet[2172]: I0702 00:55:11.763389 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:55:11.763469 kubelet[2172]: I0702 00:55:11.763434 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:55:11.763509 kubelet[2172]: I0702 00:55:11.763490 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:55:11.763554 kubelet[2172]: I0702 00:55:11.763542 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a5cfbbcbf128b888d509f7c92d652af-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a5cfbbcbf128b888d509f7c92d652af\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:55:11.763582 kubelet[2172]: I0702 00:55:11.763578 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a5cfbbcbf128b888d509f7c92d652af-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a5cfbbcbf128b888d509f7c92d652af\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:55:11.763605 kubelet[2172]: I0702 00:55:11.763599 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:55:11.998057 sudo[2185]: pam_unix(sudo:session): session closed for user root Jul 2 00:55:12.008721 kubelet[2172]: E0702 00:55:12.005732 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:12.009166 kubelet[2172]: E0702 00:55:12.009117 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:12.009341 kubelet[2172]: E0702 00:55:12.009328 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:12.551906 kubelet[2172]: I0702 00:55:12.551868 2172 apiserver.go:52] "Watching apiserver" Jul 2 00:55:12.562131 kubelet[2172]: I0702 00:55:12.562100 2172 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:55:12.609092 kubelet[2172]: E0702 00:55:12.609041 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:12.609863 kubelet[2172]: E0702 00:55:12.609837 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:12.610350 kubelet[2172]: E0702 00:55:12.610321 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:12.627785 kubelet[2172]: I0702 00:55:12.627670 2172 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.627101212 podCreationTimestamp="2024-07-02 00:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:55:12.626853492 +0000 UTC m=+1.149746601" watchObservedRunningTime="2024-07-02 00:55:12.627101212 +0000 UTC m=+1.149994281" Jul 2 00:55:12.640026 kubelet[2172]: I0702 00:55:12.639985 2172 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.6399447719999998 podCreationTimestamp="2024-07-02 00:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:55:12.633646372 +0000 UTC m=+1.156539481" watchObservedRunningTime="2024-07-02 00:55:12.639944772 +0000 UTC m=+1.162837881" Jul 2 00:55:12.640158 kubelet[2172]: I0702 00:55:12.640068 2172 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.640044412 podCreationTimestamp="2024-07-02 00:55:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:55:12.639795772 +0000 UTC m=+1.162688881" watchObservedRunningTime="2024-07-02 00:55:12.640044412 +0000 UTC m=+1.162937521" Jul 2 00:55:13.610818 kubelet[2172]: E0702 00:55:13.610780 2172 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:13.753043 sudo[1440]: pam_unix(sudo:session): session closed for user root Jul 2 00:55:13.754750 sshd[1434]: pam_unix(sshd:session): session closed for user core Jul 2 00:55:13.756994 systemd[1]: sshd@4-10.0.0.97:22-10.0.0.1:37104.service: Deactivated successfully. Jul 2 00:55:13.758967 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:55:13.758987 systemd-logind[1300]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:55:13.760011 systemd-logind[1300]: Removed session 5. Jul 2 00:55:15.386811 kubelet[2172]: E0702 00:55:15.386781 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:16.924407 kubelet[2172]: E0702 00:55:16.924361 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:21.714735 kubelet[2172]: E0702 00:55:21.714698 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:22.623399 kubelet[2172]: E0702 00:55:22.623367 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:55:24.856835 kubelet[2172]: I0702 00:55:24.856808 2172 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:55:24.857642 env[1312]: time="2024-07-02T00:55:24.857591210Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 00:55:24.857894 kubelet[2172]: I0702 00:55:24.857791 2172 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 00:55:25.395148 kubelet[2172]: E0702 00:55:25.395114 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:25.459305 kubelet[2172]: I0702 00:55:25.459263 2172 topology_manager.go:215] "Topology Admit Handler" podUID="2a2bb189-a399-4156-b324-476fc42c6985" podNamespace="kube-system" podName="kube-proxy-5g5l4"
Jul 2 00:55:25.466130 kubelet[2172]: I0702 00:55:25.466046 2172 topology_manager.go:215] "Topology Admit Handler" podUID="13b6bc0e-2a93-4e07-8196-361dd52f1d82" podNamespace="kube-system" podName="cilium-jf7gt"
Jul 2 00:55:25.564419 kubelet[2172]: I0702 00:55:25.564388 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/13b6bc0e-2a93-4e07-8196-361dd52f1d82-clustermesh-secrets\") pod \"cilium-jf7gt\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " pod="kube-system/cilium-jf7gt"
Jul 2 00:55:25.564638 kubelet[2172]: I0702 00:55:25.564625 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-bpf-maps\") pod \"cilium-jf7gt\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " pod="kube-system/cilium-jf7gt"
Jul 2 00:55:25.564727 kubelet[2172]: I0702 00:55:25.564716 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-etc-cni-netd\") pod \"cilium-jf7gt\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " pod="kube-system/cilium-jf7gt"
Jul 2 00:55:25.564802 kubelet[2172]: I0702 00:55:25.564792 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-xtables-lock\") pod \"cilium-jf7gt\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " pod="kube-system/cilium-jf7gt"
Jul 2 00:55:25.564888 kubelet[2172]: I0702 00:55:25.564878 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cilium-run\") pod \"cilium-jf7gt\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " pod="kube-system/cilium-jf7gt"
Jul 2 00:55:25.564999 kubelet[2172]: I0702 00:55:25.564988 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/13b6bc0e-2a93-4e07-8196-361dd52f1d82-hubble-tls\") pod \"cilium-jf7gt\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " pod="kube-system/cilium-jf7gt"
Jul 2 00:55:25.565086 kubelet[2172]: I0702 00:55:25.565075 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2a2bb189-a399-4156-b324-476fc42c6985-kube-proxy\") pod \"kube-proxy-5g5l4\" (UID: \"2a2bb189-a399-4156-b324-476fc42c6985\") " pod="kube-system/kube-proxy-5g5l4"
Jul 2 00:55:25.565168 kubelet[2172]: I0702 00:55:25.565158 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rffbg\" (UniqueName: \"kubernetes.io/projected/13b6bc0e-2a93-4e07-8196-361dd52f1d82-kube-api-access-rffbg\") pod \"cilium-jf7gt\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " pod="kube-system/cilium-jf7gt"
Jul 2 00:55:25.565244 kubelet[2172]: I0702 00:55:25.565235 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a2bb189-a399-4156-b324-476fc42c6985-xtables-lock\") pod \"kube-proxy-5g5l4\" (UID: \"2a2bb189-a399-4156-b324-476fc42c6985\") " pod="kube-system/kube-proxy-5g5l4"
Jul 2 00:55:25.565327 kubelet[2172]: I0702 00:55:25.565318 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-hostproc\") pod \"cilium-jf7gt\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " pod="kube-system/cilium-jf7gt"
Jul 2 00:55:25.565410 kubelet[2172]: I0702 00:55:25.565397 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cilium-config-path\") pod \"cilium-jf7gt\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " pod="kube-system/cilium-jf7gt"
Jul 2 00:55:25.565494 kubelet[2172]: I0702 00:55:25.565484 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cni-path\") pod \"cilium-jf7gt\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " pod="kube-system/cilium-jf7gt"
Jul 2 00:55:25.565588 kubelet[2172]: I0702 00:55:25.565577 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-host-proc-sys-net\") pod \"cilium-jf7gt\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " pod="kube-system/cilium-jf7gt"
Jul 2 00:55:25.565677 kubelet[2172]: I0702 00:55:25.565666 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-host-proc-sys-kernel\") pod \"cilium-jf7gt\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " pod="kube-system/cilium-jf7gt"
Jul 2 00:55:25.565765 kubelet[2172]: I0702 00:55:25.565756 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cilium-cgroup\") pod \"cilium-jf7gt\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " pod="kube-system/cilium-jf7gt"
Jul 2 00:55:25.565840 kubelet[2172]: I0702 00:55:25.565831 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-lib-modules\") pod \"cilium-jf7gt\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " pod="kube-system/cilium-jf7gt"
Jul 2 00:55:25.565916 kubelet[2172]: I0702 00:55:25.565906 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a2bb189-a399-4156-b324-476fc42c6985-lib-modules\") pod \"kube-proxy-5g5l4\" (UID: \"2a2bb189-a399-4156-b324-476fc42c6985\") " pod="kube-system/kube-proxy-5g5l4"
Jul 2 00:55:25.566010 kubelet[2172]: I0702 00:55:25.565999 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn659\" (UniqueName: \"kubernetes.io/projected/2a2bb189-a399-4156-b324-476fc42c6985-kube-api-access-pn659\") pod \"kube-proxy-5g5l4\" (UID: \"2a2bb189-a399-4156-b324-476fc42c6985\") " pod="kube-system/kube-proxy-5g5l4"
Jul 2 00:55:25.763503 kubelet[2172]: E0702 00:55:25.762813 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:25.764579 env[1312]: time="2024-07-02T00:55:25.764514250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5g5l4,Uid:2a2bb189-a399-4156-b324-476fc42c6985,Namespace:kube-system,Attempt:0,}"
Jul 2 00:55:25.768940 kubelet[2172]: E0702 00:55:25.768723 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:25.769483 env[1312]: time="2024-07-02T00:55:25.769449307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jf7gt,Uid:13b6bc0e-2a93-4e07-8196-361dd52f1d82,Namespace:kube-system,Attempt:0,}"
Jul 2 00:55:25.783261 env[1312]: time="2024-07-02T00:55:25.783204402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:55:25.783384 env[1312]: time="2024-07-02T00:55:25.783275922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:55:25.783384 env[1312]: time="2024-07-02T00:55:25.783301522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:55:25.783582 env[1312]: time="2024-07-02T00:55:25.783507401Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6 pid=2269 runtime=io.containerd.runc.v2
Jul 2 00:55:25.783582 env[1312]: time="2024-07-02T00:55:25.783498721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:55:25.783582 env[1312]: time="2024-07-02T00:55:25.783566041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:55:25.783582 env[1312]: time="2024-07-02T00:55:25.783580721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:55:25.784074 env[1312]: time="2024-07-02T00:55:25.784036478Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/edff8d5ee7f2c17f008df1895858a3aeae6f61597235b0239dc13238acd3a257 pid=2273 runtime=io.containerd.runc.v2
Jul 2 00:55:25.834825 env[1312]: time="2024-07-02T00:55:25.834783479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5g5l4,Uid:2a2bb189-a399-4156-b324-476fc42c6985,Namespace:kube-system,Attempt:0,} returns sandbox id \"edff8d5ee7f2c17f008df1895858a3aeae6f61597235b0239dc13238acd3a257\""
Jul 2 00:55:25.835714 env[1312]: time="2024-07-02T00:55:25.835686795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jf7gt,Uid:13b6bc0e-2a93-4e07-8196-361dd52f1d82,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6\""
Jul 2 00:55:25.837121 kubelet[2172]: E0702 00:55:25.836774 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:25.838046 kubelet[2172]: E0702 00:55:25.837848 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:25.840262 env[1312]: time="2024-07-02T00:55:25.840208054Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 2 00:55:25.843928 env[1312]: time="2024-07-02T00:55:25.843885516Z" level=info msg="CreateContainer within sandbox \"edff8d5ee7f2c17f008df1895858a3aeae6f61597235b0239dc13238acd3a257\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 00:55:25.856907 env[1312]: time="2024-07-02T00:55:25.856860415Z" level=info msg="CreateContainer within sandbox \"edff8d5ee7f2c17f008df1895858a3aeae6f61597235b0239dc13238acd3a257\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2b5df6c19fd11c64e9858c6ba7310732e94325529f4a42026bf0758384749eca\""
Jul 2 00:55:25.858587 env[1312]: time="2024-07-02T00:55:25.857681171Z" level=info msg="StartContainer for \"2b5df6c19fd11c64e9858c6ba7310732e94325529f4a42026bf0758384749eca\""
Jul 2 00:55:25.918667 kubelet[2172]: I0702 00:55:25.918626 2172 topology_manager.go:215] "Topology Admit Handler" podUID="248e6805-4f57-4243-b0c5-d33100cc81c6" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-25zwt"
Jul 2 00:55:25.959995 env[1312]: time="2024-07-02T00:55:25.959951769Z" level=info msg="StartContainer for \"2b5df6c19fd11c64e9858c6ba7310732e94325529f4a42026bf0758384749eca\" returns successfully"
Jul 2 00:55:25.968386 kubelet[2172]: I0702 00:55:25.968329 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/248e6805-4f57-4243-b0c5-d33100cc81c6-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-25zwt\" (UID: \"248e6805-4f57-4243-b0c5-d33100cc81c6\") " pod="kube-system/cilium-operator-6bc8ccdb58-25zwt"
Jul 2 00:55:25.968474 kubelet[2172]: I0702 00:55:25.968408 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw8vs\" (UniqueName: \"kubernetes.io/projected/248e6805-4f57-4243-b0c5-d33100cc81c6-kube-api-access-zw8vs\") pod \"cilium-operator-6bc8ccdb58-25zwt\" (UID: \"248e6805-4f57-4243-b0c5-d33100cc81c6\") " pod="kube-system/cilium-operator-6bc8ccdb58-25zwt"
Jul 2 00:55:26.230312 kubelet[2172]: E0702 00:55:26.230283 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:26.230970 env[1312]: time="2024-07-02T00:55:26.230928040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-25zwt,Uid:248e6805-4f57-4243-b0c5-d33100cc81c6,Namespace:kube-system,Attempt:0,}"
Jul 2 00:55:26.245724 env[1312]: time="2024-07-02T00:55:26.245647455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:55:26.245724 env[1312]: time="2024-07-02T00:55:26.245686775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:55:26.245724 env[1312]: time="2024-07-02T00:55:26.245696815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:55:26.245889 env[1312]: time="2024-07-02T00:55:26.245837654Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc48c42dbd815e5acc768016f0a1fabcd5ff1d6c4c27b419e27ec79231156996 pid=2460 runtime=io.containerd.runc.v2
Jul 2 00:55:26.300373 env[1312]: time="2024-07-02T00:55:26.300328933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-25zwt,Uid:248e6805-4f57-4243-b0c5-d33100cc81c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc48c42dbd815e5acc768016f0a1fabcd5ff1d6c4c27b419e27ec79231156996\""
Jul 2 00:55:26.301336 kubelet[2172]: E0702 00:55:26.301113 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:26.638763 kubelet[2172]: E0702 00:55:26.638737 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:26.938775 kubelet[2172]: E0702 00:55:26.938581 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:26.951979 kubelet[2172]: I0702 00:55:26.951938 2172 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5g5l4" podStartSLOduration=1.951873215 podCreationTimestamp="2024-07-02 00:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:55:26.648327756 +0000 UTC m=+15.171220825" watchObservedRunningTime="2024-07-02 00:55:26.951873215 +0000 UTC m=+15.474766324"
Jul 2 00:55:27.639177 kubelet[2172]: E0702 00:55:27.639152 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:28.522175 update_engine[1303]: I0702 00:55:28.522128 1303 update_attempter.cc:509] Updating boot flags...
Jul 2 00:55:29.649223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3592190502.mount: Deactivated successfully.
Jul 2 00:55:31.897821 env[1312]: time="2024-07-02T00:55:31.897775890Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:55:31.899940 env[1312]: time="2024-07-02T00:55:31.899898043Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:55:31.901514 env[1312]: time="2024-07-02T00:55:31.901490598Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:55:31.902041 env[1312]: time="2024-07-02T00:55:31.902002437Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 2 00:55:31.902616 env[1312]: time="2024-07-02T00:55:31.902588715Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 2 00:55:31.904639 env[1312]: time="2024-07-02T00:55:31.904597748Z" level=info msg="CreateContainer within sandbox \"1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 00:55:31.915522 env[1312]: time="2024-07-02T00:55:31.915479033Z" level=info msg="CreateContainer within sandbox \"1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90\""
Jul 2 00:55:31.916635 env[1312]: time="2024-07-02T00:55:31.916589470Z" level=info msg="StartContainer for \"a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90\""
Jul 2 00:55:32.026295 env[1312]: time="2024-07-02T00:55:32.026243924Z" level=info msg="StartContainer for \"a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90\" returns successfully"
Jul 2 00:55:32.041183 env[1312]: time="2024-07-02T00:55:32.041141759Z" level=info msg="shim disconnected" id=a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90
Jul 2 00:55:32.041384 env[1312]: time="2024-07-02T00:55:32.041366639Z" level=warning msg="cleaning up after shim disconnected" id=a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90 namespace=k8s.io
Jul 2 00:55:32.041442 env[1312]: time="2024-07-02T00:55:32.041428399Z" level=info msg="cleaning up dead shim"
Jul 2 00:55:32.051756 env[1312]: time="2024-07-02T00:55:32.051713008Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:55:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2601 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T00:55:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Jul 2 00:55:32.655955 kubelet[2172]: E0702 00:55:32.655923 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:32.660344 env[1312]: time="2024-07-02T00:55:32.660285542Z" level=info msg="CreateContainer within sandbox \"1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 00:55:32.673277 env[1312]: time="2024-07-02T00:55:32.673240183Z" level=info msg="CreateContainer within sandbox \"1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e\""
Jul 2 00:55:32.674983 env[1312]: time="2024-07-02T00:55:32.674652299Z" level=info msg="StartContainer for \"3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e\""
Jul 2 00:55:32.726615 env[1312]: time="2024-07-02T00:55:32.726288904Z" level=info msg="StartContainer for \"3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e\" returns successfully"
Jul 2 00:55:32.754945 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:55:32.755214 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 00:55:32.755758 systemd[1]: Stopping systemd-sysctl.service...
Jul 2 00:55:32.757291 systemd[1]: Starting systemd-sysctl.service...
Jul 2 00:55:32.768508 systemd[1]: Finished systemd-sysctl.service.
Jul 2 00:55:32.791926 env[1312]: time="2024-07-02T00:55:32.791861188Z" level=info msg="shim disconnected" id=3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e
Jul 2 00:55:32.791926 env[1312]: time="2024-07-02T00:55:32.791922267Z" level=warning msg="cleaning up after shim disconnected" id=3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e namespace=k8s.io
Jul 2 00:55:32.791926 env[1312]: time="2024-07-02T00:55:32.791932587Z" level=info msg="cleaning up dead shim"
Jul 2 00:55:32.799889 env[1312]: time="2024-07-02T00:55:32.799827964Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:55:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2666 runtime=io.containerd.runc.v2\n"
Jul 2 00:55:32.913250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90-rootfs.mount: Deactivated successfully.
Jul 2 00:55:33.216752 env[1312]: time="2024-07-02T00:55:33.216552674Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:55:33.218038 env[1312]: time="2024-07-02T00:55:33.217995870Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:55:33.219280 env[1312]: time="2024-07-02T00:55:33.219253146Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:55:33.219745 env[1312]: time="2024-07-02T00:55:33.219718065Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 2 00:55:33.221780 env[1312]: time="2024-07-02T00:55:33.221742139Z" level=info msg="CreateContainer within sandbox \"dc48c42dbd815e5acc768016f0a1fabcd5ff1d6c4c27b419e27ec79231156996\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 2 00:55:33.229585 env[1312]: time="2024-07-02T00:55:33.229550317Z" level=info msg="CreateContainer within sandbox \"dc48c42dbd815e5acc768016f0a1fabcd5ff1d6c4c27b419e27ec79231156996\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7\""
Jul 2 00:55:33.230268 env[1312]: time="2024-07-02T00:55:33.230204916Z" level=info msg="StartContainer for \"7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7\""
Jul 2 00:55:33.285294 env[1312]: time="2024-07-02T00:55:33.285254681Z" level=info msg="StartContainer for \"7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7\" returns successfully"
Jul 2 00:55:33.654993 kubelet[2172]: E0702 00:55:33.654954 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:33.656338 kubelet[2172]: E0702 00:55:33.656299 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:33.658690 env[1312]: time="2024-07-02T00:55:33.658654551Z" level=info msg="CreateContainer within sandbox \"1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 00:55:33.664009 kubelet[2172]: I0702 00:55:33.663983 2172 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-25zwt" podStartSLOduration=1.747127752 podCreationTimestamp="2024-07-02 00:55:25 +0000 UTC" firstStartedPulling="2024-07-02 00:55:26.303108881 +0000 UTC m=+14.826001990" lastFinishedPulling="2024-07-02 00:55:33.219923065 +0000 UTC m=+21.742816174" observedRunningTime="2024-07-02 00:55:33.663041098 +0000 UTC m=+22.185934207" watchObservedRunningTime="2024-07-02 00:55:33.663941936 +0000 UTC m=+22.186835085"
Jul 2 00:55:33.672446 env[1312]: time="2024-07-02T00:55:33.672401152Z" level=info msg="CreateContainer within sandbox \"1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2\""
Jul 2 00:55:33.673031 env[1312]: time="2024-07-02T00:55:33.672990071Z" level=info msg="StartContainer for \"2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2\""
Jul 2 00:55:33.767592 env[1312]: time="2024-07-02T00:55:33.767546965Z" level=info msg="StartContainer for \"2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2\" returns successfully"
Jul 2 00:55:33.874149 env[1312]: time="2024-07-02T00:55:33.874106265Z" level=info msg="shim disconnected" id=2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2
Jul 2 00:55:33.874399 env[1312]: time="2024-07-02T00:55:33.874378784Z" level=warning msg="cleaning up after shim disconnected" id=2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2 namespace=k8s.io
Jul 2 00:55:33.874484 env[1312]: time="2024-07-02T00:55:33.874470664Z" level=info msg="cleaning up dead shim"
Jul 2 00:55:33.908777 env[1312]: time="2024-07-02T00:55:33.908683648Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:55:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2760 runtime=io.containerd.runc.v2\n"
Jul 2 00:55:33.913202 systemd[1]: run-containerd-runc-k8s.io-7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7-runc.pAwaWu.mount: Deactivated successfully.
Jul 2 00:55:34.659996 kubelet[2172]: E0702 00:55:34.659966 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:34.660709 kubelet[2172]: E0702 00:55:34.660680 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:34.664068 env[1312]: time="2024-07-02T00:55:34.664018760Z" level=info msg="CreateContainer within sandbox \"1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 00:55:34.675617 env[1312]: time="2024-07-02T00:55:34.675554810Z" level=info msg="CreateContainer within sandbox \"1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514\""
Jul 2 00:55:34.676054 env[1312]: time="2024-07-02T00:55:34.676025248Z" level=info msg="StartContainer for \"60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514\""
Jul 2 00:55:34.743615 env[1312]: time="2024-07-02T00:55:34.743566070Z" level=info msg="StartContainer for \"60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514\" returns successfully"
Jul 2 00:55:34.761629 env[1312]: time="2024-07-02T00:55:34.761586983Z" level=info msg="shim disconnected" id=60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514
Jul 2 00:55:34.761908 env[1312]: time="2024-07-02T00:55:34.761857022Z" level=warning msg="cleaning up after shim disconnected" id=60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514 namespace=k8s.io
Jul 2 00:55:34.762000 env[1312]: time="2024-07-02T00:55:34.761985302Z" level=info msg="cleaning up dead shim"
Jul 2 00:55:34.768339 env[1312]: time="2024-07-02T00:55:34.768311685Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:55:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2814 runtime=io.containerd.runc.v2\n"
Jul 2 00:55:34.913320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514-rootfs.mount: Deactivated successfully.
Jul 2 00:55:35.663818 kubelet[2172]: E0702 00:55:35.663790 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:35.666306 env[1312]: time="2024-07-02T00:55:35.666260107Z" level=info msg="CreateContainer within sandbox \"1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 00:55:35.680977 env[1312]: time="2024-07-02T00:55:35.680928151Z" level=info msg="CreateContainer within sandbox \"1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9\""
Jul 2 00:55:35.681438 env[1312]: time="2024-07-02T00:55:35.681412350Z" level=info msg="StartContainer for \"602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9\""
Jul 2 00:55:35.762830 env[1312]: time="2024-07-02T00:55:35.762780749Z" level=info msg="StartContainer for \"602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9\" returns successfully"
Jul 2 00:55:35.843376 kubelet[2172]: I0702 00:55:35.843334 2172 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jul 2 00:55:35.877037 kubelet[2172]: I0702 00:55:35.876987 2172 topology_manager.go:215] "Topology Admit Handler" podUID="3c86c7d4-9304-4264-9be9-6deb15db360b" podNamespace="kube-system" podName="coredns-5dd5756b68-mf5p8"
Jul 2 00:55:35.879686 kubelet[2172]: I0702 00:55:35.879652 2172 topology_manager.go:215] "Topology Admit Handler" podUID="32fec203-8519-46da-b6ae-45bb710328f3" podNamespace="kube-system" podName="coredns-5dd5756b68-hj8nd"
Jul 2 00:55:36.001567 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Jul 2 00:55:36.050133 kubelet[2172]: I0702 00:55:36.050094 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32fec203-8519-46da-b6ae-45bb710328f3-config-volume\") pod \"coredns-5dd5756b68-hj8nd\" (UID: \"32fec203-8519-46da-b6ae-45bb710328f3\") " pod="kube-system/coredns-5dd5756b68-hj8nd"
Jul 2 00:55:36.050133 kubelet[2172]: I0702 00:55:36.050141 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqnqj\" (UniqueName: \"kubernetes.io/projected/32fec203-8519-46da-b6ae-45bb710328f3-kube-api-access-jqnqj\") pod \"coredns-5dd5756b68-hj8nd\" (UID: \"32fec203-8519-46da-b6ae-45bb710328f3\") " pod="kube-system/coredns-5dd5756b68-hj8nd"
Jul 2 00:55:36.050348 kubelet[2172]: I0702 00:55:36.050239 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c86c7d4-9304-4264-9be9-6deb15db360b-config-volume\") pod \"coredns-5dd5756b68-mf5p8\" (UID: \"3c86c7d4-9304-4264-9be9-6deb15db360b\") " pod="kube-system/coredns-5dd5756b68-mf5p8"
Jul 2 00:55:36.050348 kubelet[2172]: I0702 00:55:36.050282 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brmv7\" (UniqueName: \"kubernetes.io/projected/3c86c7d4-9304-4264-9be9-6deb15db360b-kube-api-access-brmv7\") pod \"coredns-5dd5756b68-mf5p8\" (UID: \"3c86c7d4-9304-4264-9be9-6deb15db360b\") " pod="kube-system/coredns-5dd5756b68-mf5p8"
Jul 2 00:55:36.194963 kubelet[2172]: E0702 00:55:36.194927 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:36.195446 kubelet[2172]: E0702 00:55:36.195396 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:36.196220 env[1312]: time="2024-07-02T00:55:36.195691709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-mf5p8,Uid:3c86c7d4-9304-4264-9be9-6deb15db360b,Namespace:kube-system,Attempt:0,}"
Jul 2 00:55:36.196220 env[1312]: time="2024-07-02T00:55:36.195849989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hj8nd,Uid:32fec203-8519-46da-b6ae-45bb710328f3,Namespace:kube-system,Attempt:0,}"
Jul 2 00:55:36.252608 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Jul 2 00:55:36.668287 kubelet[2172]: E0702 00:55:36.667930 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:36.681299 kubelet[2172]: I0702 00:55:36.680977 2172 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jf7gt" podStartSLOduration=5.617136731 podCreationTimestamp="2024-07-02 00:55:25 +0000 UTC" firstStartedPulling="2024-07-02 00:55:25.838614781 +0000 UTC m=+14.361507890" lastFinishedPulling="2024-07-02 00:55:31.902407555 +0000 UTC m=+20.425300664" observedRunningTime="2024-07-02 00:55:36.680208986 +0000 UTC m=+25.203102095" watchObservedRunningTime="2024-07-02 00:55:36.680929505 +0000 UTC m=+25.203822614"
Jul 2 00:55:37.669653 kubelet[2172]: E0702 00:55:37.669621 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:37.866464 systemd-networkd[1093]: cilium_host: Link UP
Jul 2 00:55:37.866779 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Jul 2 00:55:37.866808 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 2 00:55:37.866945 systemd-networkd[1093]: cilium_net: Link UP
Jul 2 00:55:37.867092 systemd-networkd[1093]: cilium_net: Gained carrier
Jul 2 00:55:37.867209 systemd-networkd[1093]: cilium_host: Gained carrier
Jul 2 00:55:37.941933 systemd-networkd[1093]: cilium_vxlan: Link UP
Jul 2 00:55:37.941941 systemd-networkd[1093]: cilium_vxlan: Gained carrier
Jul 2 00:55:38.247561 kernel: NET: Registered PF_ALG protocol family
Jul 2 00:55:38.475697 systemd-networkd[1093]: cilium_host: Gained IPv6LL
Jul 2 00:55:38.671544 kubelet[2172]: E0702 00:55:38.671496 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:38.729657 systemd-networkd[1093]: cilium_net: Gained IPv6LL
Jul 2 00:55:38.820707 systemd-networkd[1093]: lxc_health: Link UP
Jul 2 00:55:38.833624 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 00:55:38.836424 systemd-networkd[1093]: lxc_health: Gained carrier
Jul 2 00:55:39.265644 systemd-networkd[1093]: lxc1de79f24ef02: Link UP
Jul 2 00:55:39.281319 systemd-networkd[1093]: lxca0ebb5f2b950: Link UP
Jul 2 00:55:39.292561 kernel: eth0: renamed from tmp96085
Jul 2 00:55:39.297553 kernel: eth0: renamed from tmp91715
Jul 2 00:55:39.305464 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 2 00:55:39.305542 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1de79f24ef02: link becomes ready
Jul 2 00:55:39.305569 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 2 00:55:39.305305 systemd-networkd[1093]: lxc1de79f24ef02: Gained carrier
Jul 2 00:55:39.306738 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca0ebb5f2b950: link becomes ready
Jul 2 00:55:39.306602 systemd-networkd[1093]: lxca0ebb5f2b950: Gained carrier
Jul 2 00:55:39.345485 systemd[1]: Started sshd@5-10.0.0.97:22-10.0.0.1:49102.service.
Jul 2 00:55:39.392924 sshd[3351]: Accepted publickey for core from 10.0.0.1 port 49102 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:55:39.394800 sshd[3351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:55:39.399243 systemd[1]: Started session-6.scope.
Jul 2 00:55:39.399960 systemd-logind[1300]: New session 6 of user core.
Jul 2 00:55:39.568951 sshd[3351]: pam_unix(sshd:session): session closed for user core
Jul 2 00:55:39.571992 systemd[1]: sshd@5-10.0.0.97:22-10.0.0.1:49102.service: Deactivated successfully.
Jul 2 00:55:39.573164 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 00:55:39.573190 systemd-logind[1300]: Session 6 logged out. Waiting for processes to exit.
Jul 2 00:55:39.574219 systemd-logind[1300]: Removed session 6.
Jul 2 00:55:39.772185 kubelet[2172]: E0702 00:55:39.772140 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:39.945905 systemd-networkd[1093]: cilium_vxlan: Gained IPv6LL
Jul 2 00:55:40.674641 kubelet[2172]: E0702 00:55:40.674600 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:40.713970 systemd-networkd[1093]: lxc_health: Gained IPv6LL
Jul 2 00:55:41.162036 systemd-networkd[1093]: lxca0ebb5f2b950: Gained IPv6LL
Jul 2 00:55:41.289966 systemd-networkd[1093]: lxc1de79f24ef02: Gained IPv6LL
Jul 2 00:55:41.676223 kubelet[2172]: E0702 00:55:41.676174 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:42.894024 env[1312]: time="2024-07-02T00:55:42.893952172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:55:42.894024 env[1312]: time="2024-07-02T00:55:42.894292892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:55:42.894024 env[1312]: time="2024-07-02T00:55:42.894333532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:55:42.894024 env[1312]: time="2024-07-02T00:55:42.894346012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:55:42.894024 env[1312]: time="2024-07-02T00:55:42.894490171Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/91715a1a4ad989cdf1339dd7006636337d57d55e189e6208e97329600994f38d pid=3402 runtime=io.containerd.runc.v2
Jul 2 00:55:42.906941 env[1312]: time="2024-07-02T00:55:42.906393953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:55:42.906941 env[1312]: time="2024-07-02T00:55:42.906423273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:55:42.907754 env[1312]: time="2024-07-02T00:55:42.907063832Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96085d8e83e8afdc14c9674685d6b8b8bc4321346a43d7b3a4390140a172d693 pid=3401 runtime=io.containerd.runc.v2
Jul 2 00:55:42.953169 systemd-resolved[1232]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 00:55:42.957553 systemd-resolved[1232]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 00:55:42.976109 env[1312]: time="2024-07-02T00:55:42.976052323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-mf5p8,Uid:3c86c7d4-9304-4264-9be9-6deb15db360b,Namespace:kube-system,Attempt:0,} returns sandbox id \"91715a1a4ad989cdf1339dd7006636337d57d55e189e6208e97329600994f38d\""
Jul 2 00:55:42.976687 kubelet[2172]: E0702 00:55:42.976666 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:42.979957 env[1312]: time="2024-07-02T00:55:42.979897237Z" level=info msg="CreateContainer within sandbox \"91715a1a4ad989cdf1339dd7006636337d57d55e189e6208e97329600994f38d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:55:42.980748 env[1312]: time="2024-07-02T00:55:42.980695356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hj8nd,Uid:32fec203-8519-46da-b6ae-45bb710328f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"96085d8e83e8afdc14c9674685d6b8b8bc4321346a43d7b3a4390140a172d693\""
Jul 2 00:55:42.981351 kubelet[2172]: E0702 00:55:42.981331 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:42.983814 env[1312]: time="2024-07-02T00:55:42.983452231Z" level=info msg="CreateContainer within sandbox \"96085d8e83e8afdc14c9674685d6b8b8bc4321346a43d7b3a4390140a172d693\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 00:55:42.999434 env[1312]: time="2024-07-02T00:55:42.999396486Z" level=info msg="CreateContainer within sandbox \"91715a1a4ad989cdf1339dd7006636337d57d55e189e6208e97329600994f38d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83c375b010269111447c9dbf6c17b14b0a8d66dff565cdff977ee5dc0030e344\""
Jul 2 00:55:43.000008 env[1312]: time="2024-07-02T00:55:42.999980125Z" level=info msg="StartContainer for \"83c375b010269111447c9dbf6c17b14b0a8d66dff565cdff977ee5dc0030e344\""
Jul 2 00:55:43.000864 env[1312]: time="2024-07-02T00:55:43.000824004Z" level=info msg="CreateContainer within sandbox \"96085d8e83e8afdc14c9674685d6b8b8bc4321346a43d7b3a4390140a172d693\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e47a35ea691713b41b895b981b4f449af88666025242d761eb03057949e18688\""
Jul 2 00:55:43.001689 env[1312]: time="2024-07-02T00:55:43.001491043Z" level=info msg="StartContainer for \"e47a35ea691713b41b895b981b4f449af88666025242d761eb03057949e18688\""
Jul 2 00:55:43.070385 env[1312]: time="2024-07-02T00:55:43.070106462Z" level=info msg="StartContainer for \"83c375b010269111447c9dbf6c17b14b0a8d66dff565cdff977ee5dc0030e344\" returns successfully"
Jul 2 00:55:43.083312 env[1312]: time="2024-07-02T00:55:43.079883007Z" level=info msg="StartContainer for \"e47a35ea691713b41b895b981b4f449af88666025242d761eb03057949e18688\" returns successfully"
Jul 2 00:55:43.694288 kubelet[2172]: E0702 00:55:43.694243 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:43.696572 kubelet[2172]: E0702 00:55:43.696508 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:43.719258 kubelet[2172]: I0702 00:55:43.718926 2172 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-mf5p8" podStartSLOduration=18.718893105 podCreationTimestamp="2024-07-02 00:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:55:43.718211146 +0000 UTC m=+32.241104215" watchObservedRunningTime="2024-07-02 00:55:43.718893105 +0000 UTC m=+32.241786214"
Jul 2 00:55:43.719258 kubelet[2172]: I0702 00:55:43.719237 2172 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hj8nd" podStartSLOduration=18.719215345 podCreationTimestamp="2024-07-02 00:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:55:43.706577483 +0000 UTC m=+32.229470592" watchObservedRunningTime="2024-07-02 00:55:43.719215345 +0000 UTC m=+32.242108454"
Jul 2 00:55:44.571902 systemd[1]: Started sshd@6-10.0.0.97:22-10.0.0.1:37084.service.
Jul 2 00:55:44.618160 sshd[3546]: Accepted publickey for core from 10.0.0.1 port 37084 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:55:44.619959 sshd[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:55:44.623518 systemd-logind[1300]: New session 7 of user core.
Jul 2 00:55:44.624431 systemd[1]: Started session-7.scope.
Jul 2 00:55:44.698003 kubelet[2172]: E0702 00:55:44.697974 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:44.698986 kubelet[2172]: E0702 00:55:44.698967 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:44.731948 sshd[3546]: pam_unix(sshd:session): session closed for user core
Jul 2 00:55:44.734645 systemd[1]: sshd@6-10.0.0.97:22-10.0.0.1:37084.service: Deactivated successfully.
Jul 2 00:55:44.735445 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 00:55:44.736345 systemd-logind[1300]: Session 7 logged out. Waiting for processes to exit.
Jul 2 00:55:44.737069 systemd-logind[1300]: Removed session 7.
Jul 2 00:55:45.700206 kubelet[2172]: E0702 00:55:45.699687 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:45.700206 kubelet[2172]: E0702 00:55:45.700146 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:55:49.735321 systemd[1]: Started sshd@7-10.0.0.97:22-10.0.0.1:37090.service.
Jul 2 00:55:49.779234 sshd[3566]: Accepted publickey for core from 10.0.0.1 port 37090 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:55:49.780573 sshd[3566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:55:49.784018 systemd-logind[1300]: New session 8 of user core.
Jul 2 00:55:49.784888 systemd[1]: Started session-8.scope.
Jul 2 00:55:49.900104 sshd[3566]: pam_unix(sshd:session): session closed for user core
Jul 2 00:55:49.902730 systemd[1]: Started sshd@8-10.0.0.97:22-10.0.0.1:37092.service.
Jul 2 00:55:49.904623 systemd[1]: sshd@7-10.0.0.97:22-10.0.0.1:37090.service: Deactivated successfully.
Jul 2 00:55:49.905656 systemd-logind[1300]: Session 8 logged out. Waiting for processes to exit.
Jul 2 00:55:49.905701 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 00:55:49.906578 systemd-logind[1300]: Removed session 8.
Jul 2 00:55:49.947470 sshd[3579]: Accepted publickey for core from 10.0.0.1 port 37092 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:55:49.949262 sshd[3579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:55:49.952695 systemd-logind[1300]: New session 9 of user core.
Jul 2 00:55:49.953599 systemd[1]: Started session-9.scope.
Jul 2 00:55:50.624102 sshd[3579]: pam_unix(sshd:session): session closed for user core
Jul 2 00:55:50.627283 systemd[1]: Started sshd@9-10.0.0.97:22-10.0.0.1:47874.service.
Jul 2 00:55:50.635268 systemd[1]: sshd@8-10.0.0.97:22-10.0.0.1:37092.service: Deactivated successfully.
Jul 2 00:55:50.637675 systemd-logind[1300]: Session 9 logged out. Waiting for processes to exit.
Jul 2 00:55:50.637715 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 00:55:50.639442 systemd-logind[1300]: Removed session 9.
Jul 2 00:55:50.676668 sshd[3591]: Accepted publickey for core from 10.0.0.1 port 47874 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:55:50.677994 sshd[3591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:55:50.681451 systemd-logind[1300]: New session 10 of user core.
Jul 2 00:55:50.682269 systemd[1]: Started session-10.scope.
Jul 2 00:55:50.795674 sshd[3591]: pam_unix(sshd:session): session closed for user core
Jul 2 00:55:50.798456 systemd[1]: sshd@9-10.0.0.97:22-10.0.0.1:47874.service: Deactivated successfully.
Jul 2 00:55:50.799457 systemd-logind[1300]: Session 10 logged out. Waiting for processes to exit.
Jul 2 00:55:50.799497 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 00:55:50.800192 systemd-logind[1300]: Removed session 10.
Jul 2 00:55:55.809307 systemd[1]: Started sshd@10-10.0.0.97:22-10.0.0.1:47884.service.
Jul 2 00:55:55.851724 sshd[3607]: Accepted publickey for core from 10.0.0.1 port 47884 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:55:55.853248 sshd[3607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:55:55.856316 systemd-logind[1300]: New session 11 of user core.
Jul 2 00:55:55.858077 systemd[1]: Started session-11.scope.
Jul 2 00:55:55.960733 sshd[3607]: pam_unix(sshd:session): session closed for user core
Jul 2 00:55:55.963073 systemd[1]: sshd@10-10.0.0.97:22-10.0.0.1:47884.service: Deactivated successfully.
Jul 2 00:55:55.964109 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 00:55:55.964463 systemd-logind[1300]: Session 11 logged out. Waiting for processes to exit.
Jul 2 00:55:55.965198 systemd-logind[1300]: Removed session 11.
Jul 2 00:56:00.964170 systemd[1]: Started sshd@11-10.0.0.97:22-10.0.0.1:59064.service.
Jul 2 00:56:01.006640 sshd[3625]: Accepted publickey for core from 10.0.0.1 port 59064 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:56:01.007955 sshd[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:56:01.011909 systemd-logind[1300]: New session 12 of user core.
Jul 2 00:56:01.012382 systemd[1]: Started session-12.scope.
Jul 2 00:56:01.125335 sshd[3625]: pam_unix(sshd:session): session closed for user core
Jul 2 00:56:01.129591 systemd[1]: Started sshd@12-10.0.0.97:22-10.0.0.1:59076.service.
Jul 2 00:56:01.130063 systemd[1]: sshd@11-10.0.0.97:22-10.0.0.1:59064.service: Deactivated successfully.
Jul 2 00:56:01.131905 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 00:56:01.132479 systemd-logind[1300]: Session 12 logged out. Waiting for processes to exit.
Jul 2 00:56:01.133695 systemd-logind[1300]: Removed session 12.
Jul 2 00:56:01.175247 sshd[3637]: Accepted publickey for core from 10.0.0.1 port 59076 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:56:01.176428 sshd[3637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:56:01.180932 systemd-logind[1300]: New session 13 of user core.
Jul 2 00:56:01.182724 systemd[1]: Started session-13.scope.
Jul 2 00:56:01.399723 sshd[3637]: pam_unix(sshd:session): session closed for user core
Jul 2 00:56:01.400999 systemd[1]: Started sshd@13-10.0.0.97:22-10.0.0.1:59086.service.
Jul 2 00:56:01.404044 systemd[1]: sshd@12-10.0.0.97:22-10.0.0.1:59076.service: Deactivated successfully.
Jul 2 00:56:01.405015 systemd-logind[1300]: Session 13 logged out. Waiting for processes to exit.
Jul 2 00:56:01.405090 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 00:56:01.406229 systemd-logind[1300]: Removed session 13.
Jul 2 00:56:01.447906 sshd[3649]: Accepted publickey for core from 10.0.0.1 port 59086 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:56:01.449164 sshd[3649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:56:01.455660 systemd-logind[1300]: New session 14 of user core.
Jul 2 00:56:01.456830 systemd[1]: Started session-14.scope.
Jul 2 00:56:02.209633 sshd[3649]: pam_unix(sshd:session): session closed for user core
Jul 2 00:56:02.210925 systemd[1]: Started sshd@14-10.0.0.97:22-10.0.0.1:59092.service.
Jul 2 00:56:02.213409 systemd[1]: sshd@13-10.0.0.97:22-10.0.0.1:59086.service: Deactivated successfully.
Jul 2 00:56:02.214724 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 00:56:02.214747 systemd-logind[1300]: Session 14 logged out. Waiting for processes to exit.
Jul 2 00:56:02.216021 systemd-logind[1300]: Removed session 14.
Jul 2 00:56:02.253765 sshd[3668]: Accepted publickey for core from 10.0.0.1 port 59092 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:56:02.255021 sshd[3668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:56:02.258175 systemd-logind[1300]: New session 15 of user core.
Jul 2 00:56:02.258930 systemd[1]: Started session-15.scope.
Jul 2 00:56:02.519982 sshd[3668]: pam_unix(sshd:session): session closed for user core
Jul 2 00:56:02.521850 systemd[1]: Started sshd@15-10.0.0.97:22-10.0.0.1:59096.service.
Jul 2 00:56:02.526149 systemd[1]: sshd@14-10.0.0.97:22-10.0.0.1:59092.service: Deactivated successfully.
Jul 2 00:56:02.526969 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 00:56:02.528781 systemd-logind[1300]: Session 15 logged out. Waiting for processes to exit.
Jul 2 00:56:02.529632 systemd-logind[1300]: Removed session 15.
Jul 2 00:56:02.571615 sshd[3681]: Accepted publickey for core from 10.0.0.1 port 59096 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:56:02.573193 sshd[3681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:56:02.576861 systemd-logind[1300]: New session 16 of user core.
Jul 2 00:56:02.577878 systemd[1]: Started session-16.scope.
Jul 2 00:56:02.694071 sshd[3681]: pam_unix(sshd:session): session closed for user core
Jul 2 00:56:02.696518 systemd[1]: sshd@15-10.0.0.97:22-10.0.0.1:59096.service: Deactivated successfully.
Jul 2 00:56:02.697467 systemd-logind[1300]: Session 16 logged out. Waiting for processes to exit.
Jul 2 00:56:02.697581 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 00:56:02.698802 systemd-logind[1300]: Removed session 16.
Jul 2 00:56:07.697407 systemd[1]: Started sshd@16-10.0.0.97:22-10.0.0.1:59104.service.
Jul 2 00:56:07.740391 sshd[3700]: Accepted publickey for core from 10.0.0.1 port 59104 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:56:07.741662 sshd[3700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:56:07.745840 systemd-logind[1300]: New session 17 of user core.
Jul 2 00:56:07.746657 systemd[1]: Started session-17.scope.
Jul 2 00:56:07.855281 sshd[3700]: pam_unix(sshd:session): session closed for user core
Jul 2 00:56:07.857861 systemd[1]: sshd@16-10.0.0.97:22-10.0.0.1:59104.service: Deactivated successfully.
Jul 2 00:56:07.858853 systemd-logind[1300]: Session 17 logged out. Waiting for processes to exit.
Jul 2 00:56:07.858916 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 00:56:07.860056 systemd-logind[1300]: Removed session 17.
Jul 2 00:56:12.858235 systemd[1]: Started sshd@17-10.0.0.97:22-10.0.0.1:59808.service.
Jul 2 00:56:12.901360 sshd[3717]: Accepted publickey for core from 10.0.0.1 port 59808 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:56:12.902728 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:56:12.907290 systemd[1]: Started session-18.scope.
Jul 2 00:56:12.907624 systemd-logind[1300]: New session 18 of user core.
Jul 2 00:56:13.021426 sshd[3717]: pam_unix(sshd:session): session closed for user core
Jul 2 00:56:13.023754 systemd[1]: sshd@17-10.0.0.97:22-10.0.0.1:59808.service: Deactivated successfully.
Jul 2 00:56:13.024789 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 00:56:13.024790 systemd-logind[1300]: Session 18 logged out. Waiting for processes to exit.
Jul 2 00:56:13.025682 systemd-logind[1300]: Removed session 18.
Jul 2 00:56:18.024319 systemd[1]: Started sshd@18-10.0.0.97:22-10.0.0.1:59812.service.
Jul 2 00:56:18.067393 sshd[3731]: Accepted publickey for core from 10.0.0.1 port 59812 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:56:18.069084 sshd[3731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:56:18.073197 systemd-logind[1300]: New session 19 of user core.
Jul 2 00:56:18.073702 systemd[1]: Started session-19.scope.
Jul 2 00:56:18.182602 sshd[3731]: pam_unix(sshd:session): session closed for user core
Jul 2 00:56:18.185276 systemd[1]: sshd@18-10.0.0.97:22-10.0.0.1:59812.service: Deactivated successfully.
Jul 2 00:56:18.188671 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 00:56:18.188675 systemd-logind[1300]: Session 19 logged out. Waiting for processes to exit.
Jul 2 00:56:18.189975 systemd-logind[1300]: Removed session 19.
Jul 2 00:56:23.185604 systemd[1]: Started sshd@19-10.0.0.97:22-10.0.0.1:50586.service.
Jul 2 00:56:23.228619 sshd[3745]: Accepted publickey for core from 10.0.0.1 port 50586 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:56:23.230303 sshd[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:56:23.234875 systemd[1]: Started session-20.scope.
Jul 2 00:56:23.235827 systemd-logind[1300]: New session 20 of user core.
Jul 2 00:56:23.346126 sshd[3745]: pam_unix(sshd:session): session closed for user core
Jul 2 00:56:23.348513 systemd[1]: Started sshd@20-10.0.0.97:22-10.0.0.1:50590.service.
Jul 2 00:56:23.349162 systemd[1]: sshd@19-10.0.0.97:22-10.0.0.1:50586.service: Deactivated successfully.
Jul 2 00:56:23.350252 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 00:56:23.350398 systemd-logind[1300]: Session 20 logged out. Waiting for processes to exit.
Jul 2 00:56:23.351547 systemd-logind[1300]: Removed session 20.
Jul 2 00:56:23.391259 sshd[3757]: Accepted publickey for core from 10.0.0.1 port 50590 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:56:23.392465 sshd[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:56:23.396287 systemd-logind[1300]: New session 21 of user core.
Jul 2 00:56:23.396900 systemd[1]: Started session-21.scope.
Jul 2 00:56:25.574837 env[1312]: time="2024-07-02T00:56:25.574782392Z" level=info msg="StopContainer for \"7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7\" with timeout 30 (s)"
Jul 2 00:56:25.575422 env[1312]: time="2024-07-02T00:56:25.575341634Z" level=info msg="Stop container \"7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7\" with signal terminated"
Jul 2 00:56:25.588222 systemd[1]: run-containerd-runc-k8s.io-602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9-runc.84nhYU.mount: Deactivated successfully.
Jul 2 00:56:25.608810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7-rootfs.mount: Deactivated successfully.
Jul 2 00:56:25.616943 env[1312]: time="2024-07-02T00:56:25.616513634Z" level=info msg="shim disconnected" id=7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7
Jul 2 00:56:25.616943 env[1312]: time="2024-07-02T00:56:25.616576594Z" level=warning msg="cleaning up after shim disconnected" id=7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7 namespace=k8s.io
Jul 2 00:56:25.616943 env[1312]: time="2024-07-02T00:56:25.616586674Z" level=info msg="cleaning up dead shim"
Jul 2 00:56:25.616943 env[1312]: time="2024-07-02T00:56:25.616884556Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:56:25.621924 env[1312]: time="2024-07-02T00:56:25.621891740Z" level=info msg="StopContainer for \"602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9\" with timeout 2 (s)"
Jul 2 00:56:25.622161 env[1312]: time="2024-07-02T00:56:25.622137461Z" level=info msg="Stop container \"602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9\" with signal terminated"
Jul 2 00:56:25.626355 env[1312]: time="2024-07-02T00:56:25.626309201Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:56:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3805 runtime=io.containerd.runc.v2\n"
Jul 2 00:56:25.628423 systemd-networkd[1093]: lxc_health: Link DOWN
Jul 2 00:56:25.628428 systemd-networkd[1093]: lxc_health: Lost carrier
Jul 2 00:56:25.629239 env[1312]: time="2024-07-02T00:56:25.629184655Z" level=info msg="StopContainer for \"7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7\" returns successfully"
Jul 2 00:56:25.629814 env[1312]: time="2024-07-02T00:56:25.629786658Z" level=info msg="StopPodSandbox for \"dc48c42dbd815e5acc768016f0a1fabcd5ff1d6c4c27b419e27ec79231156996\""
Jul 2 00:56:25.629866 env[1312]: time="2024-07-02T00:56:25.629844298Z" level=info msg="Container to stop \"7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:56:25.631639 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc48c42dbd815e5acc768016f0a1fabcd5ff1d6c4c27b419e27ec79231156996-shm.mount: Deactivated successfully.
Jul 2 00:56:25.668048 env[1312]: time="2024-07-02T00:56:25.667987643Z" level=info msg="shim disconnected" id=dc48c42dbd815e5acc768016f0a1fabcd5ff1d6c4c27b419e27ec79231156996
Jul 2 00:56:25.668048 env[1312]: time="2024-07-02T00:56:25.668047404Z" level=warning msg="cleaning up after shim disconnected" id=dc48c42dbd815e5acc768016f0a1fabcd5ff1d6c4c27b419e27ec79231156996 namespace=k8s.io
Jul 2 00:56:25.668253 env[1312]: time="2024-07-02T00:56:25.668056484Z" level=info msg="cleaning up dead shim"
Jul 2 00:56:25.674395 env[1312]: time="2024-07-02T00:56:25.674355314Z" level=info msg="shim disconnected" id=602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9
Jul 2 00:56:25.674395 env[1312]: time="2024-07-02T00:56:25.674394594Z" level=warning msg="cleaning up after shim disconnected" id=602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9 namespace=k8s.io
Jul 2 00:56:25.674930 env[1312]: time="2024-07-02T00:56:25.674403914Z" level=info msg="cleaning up dead shim"
Jul 2 00:56:25.675937 env[1312]: time="2024-07-02T00:56:25.675893202Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:56:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3862 runtime=io.containerd.runc.v2\n"
Jul 2 00:56:25.676213 env[1312]: time="2024-07-02T00:56:25.676178723Z" level=info msg="TearDown network for sandbox \"dc48c42dbd815e5acc768016f0a1fabcd5ff1d6c4c27b419e27ec79231156996\" successfully"
Jul 2 00:56:25.676213 env[1312]: time="2024-07-02T00:56:25.676205883Z" level=info msg="StopPodSandbox for \"dc48c42dbd815e5acc768016f0a1fabcd5ff1d6c4c27b419e27ec79231156996\" returns successfully"
Jul 2 00:56:25.686078 env[1312]: time="2024-07-02T00:56:25.686026491Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:56:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3874 runtime=io.containerd.runc.v2\n"
Jul 2 00:56:25.688539 env[1312]: time="2024-07-02T00:56:25.688497703Z" level=info msg="StopContainer for \"602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9\" returns successfully"
Jul 2 00:56:25.688917 env[1312]: time="2024-07-02T00:56:25.688891705Z" level=info msg="StopPodSandbox for \"1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6\""
Jul 2 00:56:25.688968 env[1312]: time="2024-07-02T00:56:25.688951705Z" level=info msg="Container to stop \"60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:56:25.689005 env[1312]: time="2024-07-02T00:56:25.688969545Z" level=info msg="Container to stop \"a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:56:25.689005 env[1312]: time="2024-07-02T00:56:25.688986305Z" level=info msg="Container to stop \"3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:56:25.689059 env[1312]: time="2024-07-02T00:56:25.688999225Z" level=info msg="Container to stop \"2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:56:25.689059 env[1312]: time="2024-07-02T00:56:25.689014145Z" level=info msg="Container to stop \"602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:56:25.713930 env[1312]: time="2024-07-02T00:56:25.713871906Z" level=info msg="shim disconnected" id=1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6
Jul 2 00:56:25.713930 env[1312]: time="2024-07-02T00:56:25.713923946Z" level=warning msg="cleaning up after shim disconnected" id=1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6 namespace=k8s.io
Jul 2 00:56:25.713930 env[1312]: time="2024-07-02T00:56:25.713934786Z" level=info msg="cleaning up dead shim"
Jul 2 00:56:25.721016 env[1312]: time="2024-07-02T00:56:25.720974220Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:56:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3906 runtime=io.containerd.runc.v2\n"
Jul 2 00:56:25.721306 env[1312]: time="2024-07-02T00:56:25.721281262Z" level=info msg="TearDown network for sandbox \"1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6\" successfully"
Jul 2 00:56:25.721340 env[1312]: time="2024-07-02T00:56:25.721307102Z" level=info msg="StopPodSandbox for \"1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6\" returns successfully"
Jul 2 00:56:25.773856 kubelet[2172]: I0702 00:56:25.773826 2172 scope.go:117] "RemoveContainer" containerID="7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7"
Jul 2 00:56:25.776009 env[1312]: time="2024-07-02T00:56:25.775935007Z" level=info msg="RemoveContainer for \"7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7\""
Jul 2 00:56:25.779716 env[1312]: time="2024-07-02T00:56:25.779687745Z" level=info msg="RemoveContainer for \"7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7\" returns successfully"
Jul 2 00:56:25.780103 kubelet[2172]: I0702 00:56:25.780082 2172 scope.go:117] "RemoveContainer" containerID="7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7"
Jul 2 00:56:25.780745 env[1312]: time="2024-07-02T00:56:25.780629749Z" level=error msg="ContainerStatus for \"7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7\": not found"
Jul 2 00:56:25.781360 kubelet[2172]: E0702 00:56:25.781338 2172 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7\": not found" containerID="7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7"
Jul 2 00:56:25.781520 kubelet[2172]: I0702 00:56:25.781505 2172 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7"} err="failed to get container status \"7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e04f842b2ba470e29219a26fb506beb96c167ec451842a72c96fd0728dfaaa7\": not found"
Jul 2 00:56:25.781709 kubelet[2172]: I0702 00:56:25.781694 2172 scope.go:117] "RemoveContainer" containerID="602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9"
Jul 2 00:56:25.782741 env[1312]: time="2024-07-02T00:56:25.782714759Z" level=info msg="RemoveContainer for \"602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9\""
Jul 2 00:56:25.785387 env[1312]: time="2024-07-02T00:56:25.785351772Z" level=info msg="RemoveContainer for \"602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9\" returns successfully"
Jul 2
00:56:25.785569 kubelet[2172]: I0702 00:56:25.785551 2172 scope.go:117] "RemoveContainer" containerID="60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514" Jul 2 00:56:25.786687 env[1312]: time="2024-07-02T00:56:25.786662419Z" level=info msg="RemoveContainer for \"60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514\"" Jul 2 00:56:25.788945 env[1312]: time="2024-07-02T00:56:25.788916669Z" level=info msg="RemoveContainer for \"60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514\" returns successfully" Jul 2 00:56:25.789110 kubelet[2172]: I0702 00:56:25.789092 2172 scope.go:117] "RemoveContainer" containerID="2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2" Jul 2 00:56:25.790240 env[1312]: time="2024-07-02T00:56:25.790188996Z" level=info msg="RemoveContainer for \"2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2\"" Jul 2 00:56:25.793049 env[1312]: time="2024-07-02T00:56:25.793017449Z" level=info msg="RemoveContainer for \"2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2\" returns successfully" Jul 2 00:56:25.793331 kubelet[2172]: I0702 00:56:25.793313 2172 scope.go:117] "RemoveContainer" containerID="3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e" Jul 2 00:56:25.794289 env[1312]: time="2024-07-02T00:56:25.794265775Z" level=info msg="RemoveContainer for \"3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e\"" Jul 2 00:56:25.797500 env[1312]: time="2024-07-02T00:56:25.797473471Z" level=info msg="RemoveContainer for \"3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e\" returns successfully" Jul 2 00:56:25.797760 kubelet[2172]: I0702 00:56:25.797732 2172 scope.go:117] "RemoveContainer" containerID="a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90" Jul 2 00:56:25.798760 env[1312]: time="2024-07-02T00:56:25.798722357Z" level=info msg="RemoveContainer for \"a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90\"" Jul 2 00:56:25.801150 env[1312]: time="2024-07-02T00:56:25.801118289Z" level=info msg="RemoveContainer for \"a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90\" returns successfully" Jul 2 00:56:25.801399 kubelet[2172]: I0702 00:56:25.801379 2172 scope.go:117] "RemoveContainer" containerID="602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9" Jul 2 00:56:25.801662 env[1312]: time="2024-07-02T00:56:25.801584891Z" level=error msg="ContainerStatus for \"602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9\": not found" Jul 2 00:56:25.801867 kubelet[2172]: E0702 00:56:25.801841 2172 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9\": not found" containerID="602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9" Jul 2 00:56:25.801927 kubelet[2172]: I0702 00:56:25.801879 2172 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9"} err="failed to get container status \"602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9\": not found" Jul 2 00:56:25.801927 kubelet[2172]: I0702 00:56:25.801891 2172 scope.go:117] "RemoveContainer" containerID="60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514" Jul 2 00:56:25.802079 env[1312]: time="2024-07-02T00:56:25.802029413Z" level=error msg="ContainerStatus for \"60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514\": not found" Jul 2 00:56:25.802222 kubelet[2172]: E0702 00:56:25.802205 2172 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514\": not found" containerID="60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514" Jul 2 00:56:25.802312 kubelet[2172]: I0702 00:56:25.802299 2172 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514"} err="failed to get container status \"60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514\": rpc error: code = NotFound desc = an error occurred when try to find container \"60444cf1a4c5a2c9c430d08357e7b00420de1b81c48e13a05550e9a726ac1514\": not found" Jul 2 00:56:25.802375 kubelet[2172]: I0702 00:56:25.802365 2172 scope.go:117] "RemoveContainer" containerID="2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2" Jul 2 00:56:25.802663 env[1312]: time="2024-07-02T00:56:25.802617696Z" level=error msg="ContainerStatus for \"2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2\": not found" Jul 2 00:56:25.802888 kubelet[2172]: E0702 00:56:25.802876 2172 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2\": not found" containerID="2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2" Jul 2 00:56:25.802950 kubelet[2172]: I0702 00:56:25.802900 2172 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2"} err="failed to get container status \"2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"2707d3b7ed26741e23469883c056da33583bfa47864c7335008b6a2b5b8d73f2\": not found" Jul 2 00:56:25.802950 kubelet[2172]: I0702 00:56:25.802911 2172 scope.go:117] "RemoveContainer" containerID="3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e" Jul 2 00:56:25.803154 env[1312]: time="2024-07-02T00:56:25.803110818Z" level=error msg="ContainerStatus for \"3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e\": not found" Jul 2 00:56:25.803346 kubelet[2172]: E0702 00:56:25.803328 2172 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e\": not found" containerID="3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e" Jul 2 00:56:25.803401 kubelet[2172]: I0702 00:56:25.803356 2172 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e"} err="failed to get container status \"3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e\": rpc error: code = NotFound desc = an error occurred when try to find container \"3319869ed29fb15655d05c3c5b8a2afbbfc59e9a67943b562596f3122642fe9e\": not found" Jul 2 00:56:25.803401 kubelet[2172]: I0702 00:56:25.803365 2172 scope.go:117] "RemoveContainer" containerID="a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90" Jul 2 00:56:25.803573 env[1312]: time="2024-07-02T00:56:25.803518660Z" level=error msg="ContainerStatus for \"a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90\": not found" Jul 2 00:56:25.803698 kubelet[2172]: E0702 00:56:25.803683 2172 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90\": not found" containerID="a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90" Jul 2 00:56:25.803760 kubelet[2172]: I0702 00:56:25.803705 2172 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90"} err="failed to get container status \"a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90\": rpc error: code = NotFound desc = an error occurred when try to find container \"a956ae5b91c61773575fcc43fb9009ea36f7fa290957fbada090569addcc7f90\": not found" Jul 2 00:56:25.826979 kubelet[2172]: I0702 00:56:25.826899 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/248e6805-4f57-4243-b0c5-d33100cc81c6-cilium-config-path\") pod \"248e6805-4f57-4243-b0c5-d33100cc81c6\" (UID: \"248e6805-4f57-4243-b0c5-d33100cc81c6\") " Jul 2 00:56:25.827160 kubelet[2172]: I0702 00:56:25.827103 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/13b6bc0e-2a93-4e07-8196-361dd52f1d82-clustermesh-secrets\") pod \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " Jul 2 00:56:25.827249 kubelet[2172]: I0702 00:56:25.827237 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-bpf-maps\") pod \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " Jul 2 00:56:25.827753 kubelet[2172]: I0702 00:56:25.827513 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cni-path\") pod \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " Jul 2 00:56:25.829254 
kubelet[2172]: I0702 00:56:25.829197 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "13b6bc0e-2a93-4e07-8196-361dd52f1d82" (UID: "13b6bc0e-2a93-4e07-8196-361dd52f1d82"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:25.829380 kubelet[2172]: I0702 00:56:25.829198 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cni-path" (OuterVolumeSpecName: "cni-path") pod "13b6bc0e-2a93-4e07-8196-361dd52f1d82" (UID: "13b6bc0e-2a93-4e07-8196-361dd52f1d82"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:25.829380 kubelet[2172]: I0702 00:56:25.829239 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cilium-cgroup\") pod \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " Jul 2 00:56:25.829380 kubelet[2172]: I0702 00:56:25.829318 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw8vs\" (UniqueName: \"kubernetes.io/projected/248e6805-4f57-4243-b0c5-d33100cc81c6-kube-api-access-zw8vs\") pod \"248e6805-4f57-4243-b0c5-d33100cc81c6\" (UID: \"248e6805-4f57-4243-b0c5-d33100cc81c6\") " Jul 2 00:56:25.829380 kubelet[2172]: I0702 00:56:25.829338 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-etc-cni-netd\") pod \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " Jul 2 00:56:25.829380 kubelet[2172]: I0702 00:56:25.829362 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-xtables-lock\") pod \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " Jul 2 00:56:25.829501 kubelet[2172]: I0702 00:56:25.829391 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rffbg\" (UniqueName: \"kubernetes.io/projected/13b6bc0e-2a93-4e07-8196-361dd52f1d82-kube-api-access-rffbg\") pod \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " Jul 2 00:56:25.829501 kubelet[2172]: I0702 00:56:25.829434 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cilium-config-path\") pod \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " Jul 2 00:56:25.829501 kubelet[2172]: I0702 00:56:25.829450 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-lib-modules\") pod \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " Jul 2 00:56:25.829501 kubelet[2172]: I0702 00:56:25.829474 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cilium-run\") pod 
\"13b6bc0e-2a93-4e07-8196-361dd52f1d82\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " Jul 2 00:56:25.829501 kubelet[2172]: I0702 00:56:25.829490 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-hostproc\") pod \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " Jul 2 00:56:25.829631 kubelet[2172]: I0702 00:56:25.829507 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-host-proc-sys-kernel\") pod \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " Jul 2 00:56:25.829631 kubelet[2172]: I0702 00:56:25.829548 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/13b6bc0e-2a93-4e07-8196-361dd52f1d82-hubble-tls\") pod \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " Jul 2 00:56:25.829631 kubelet[2172]: I0702 00:56:25.829567 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-host-proc-sys-net\") pod \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\" (UID: \"13b6bc0e-2a93-4e07-8196-361dd52f1d82\") " Jul 2 00:56:25.829631 kubelet[2172]: I0702 00:56:25.829600 2172 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:25.829631 kubelet[2172]: I0702 00:56:25.829610 2172 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:25.829736 kubelet[2172]: I0702 00:56:25.829640 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "13b6bc0e-2a93-4e07-8196-361dd52f1d82" (UID: "13b6bc0e-2a93-4e07-8196-361dd52f1d82"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:25.829736 kubelet[2172]: I0702 00:56:25.829657 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "13b6bc0e-2a93-4e07-8196-361dd52f1d82" (UID: "13b6bc0e-2a93-4e07-8196-361dd52f1d82"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:25.829736 kubelet[2172]: I0702 00:56:25.829681 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "13b6bc0e-2a93-4e07-8196-361dd52f1d82" (UID: "13b6bc0e-2a93-4e07-8196-361dd52f1d82"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:25.829736 kubelet[2172]: I0702 00:56:25.829705 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-hostproc" (OuterVolumeSpecName: "hostproc") pod "13b6bc0e-2a93-4e07-8196-361dd52f1d82" (UID: "13b6bc0e-2a93-4e07-8196-361dd52f1d82"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:25.829736 kubelet[2172]: I0702 00:56:25.829721 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "13b6bc0e-2a93-4e07-8196-361dd52f1d82" (UID: "13b6bc0e-2a93-4e07-8196-361dd52f1d82"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:25.829910 kubelet[2172]: I0702 00:56:25.829889 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "13b6bc0e-2a93-4e07-8196-361dd52f1d82" (UID: "13b6bc0e-2a93-4e07-8196-361dd52f1d82"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:25.829991 kubelet[2172]: I0702 00:56:25.829978 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "13b6bc0e-2a93-4e07-8196-361dd52f1d82" (UID: "13b6bc0e-2a93-4e07-8196-361dd52f1d82"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:25.831002 kubelet[2172]: I0702 00:56:25.830965 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "13b6bc0e-2a93-4e07-8196-361dd52f1d82" (UID: "13b6bc0e-2a93-4e07-8196-361dd52f1d82"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:25.831132 kubelet[2172]: I0702 00:56:25.831102 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/248e6805-4f57-4243-b0c5-d33100cc81c6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "248e6805-4f57-4243-b0c5-d33100cc81c6" (UID: "248e6805-4f57-4243-b0c5-d33100cc81c6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:56:25.832329 kubelet[2172]: I0702 00:56:25.832293 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "13b6bc0e-2a93-4e07-8196-361dd52f1d82" (UID: "13b6bc0e-2a93-4e07-8196-361dd52f1d82"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:56:25.833368 kubelet[2172]: I0702 00:56:25.833343 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b6bc0e-2a93-4e07-8196-361dd52f1d82-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "13b6bc0e-2a93-4e07-8196-361dd52f1d82" (UID: "13b6bc0e-2a93-4e07-8196-361dd52f1d82"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:56:25.833702 kubelet[2172]: I0702 00:56:25.833677 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/248e6805-4f57-4243-b0c5-d33100cc81c6-kube-api-access-zw8vs" (OuterVolumeSpecName: "kube-api-access-zw8vs") pod "248e6805-4f57-4243-b0c5-d33100cc81c6" (UID: "248e6805-4f57-4243-b0c5-d33100cc81c6"). InnerVolumeSpecName "kube-api-access-zw8vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:56:25.833808 kubelet[2172]: I0702 00:56:25.833779 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b6bc0e-2a93-4e07-8196-361dd52f1d82-kube-api-access-rffbg" (OuterVolumeSpecName: "kube-api-access-rffbg") pod "13b6bc0e-2a93-4e07-8196-361dd52f1d82" (UID: "13b6bc0e-2a93-4e07-8196-361dd52f1d82"). InnerVolumeSpecName "kube-api-access-rffbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:56:25.834409 kubelet[2172]: I0702 00:56:25.834385 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b6bc0e-2a93-4e07-8196-361dd52f1d82-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "13b6bc0e-2a93-4e07-8196-361dd52f1d82" (UID: "13b6bc0e-2a93-4e07-8196-361dd52f1d82"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:56:25.930014 kubelet[2172]: I0702 00:56:25.929974 2172 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/248e6805-4f57-4243-b0c5-d33100cc81c6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:25.930014 kubelet[2172]: I0702 00:56:25.930011 2172 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/13b6bc0e-2a93-4e07-8196-361dd52f1d82-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:25.930014 kubelet[2172]: I0702 00:56:25.930022 2172 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:25.930204 kubelet[2172]: I0702 00:56:25.930032 2172 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zw8vs\" (UniqueName: \"kubernetes.io/projected/248e6805-4f57-4243-b0c5-d33100cc81c6-kube-api-access-zw8vs\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:25.930204 kubelet[2172]: I0702 00:56:25.930042 2172 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:25.930204 kubelet[2172]: I0702 00:56:25.930051 2172 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:25.930204 kubelet[2172]: I0702 00:56:25.930060 2172 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rffbg\" (UniqueName: \"kubernetes.io/projected/13b6bc0e-2a93-4e07-8196-361dd52f1d82-kube-api-access-rffbg\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:25.930204 kubelet[2172]: I0702 00:56:25.930070 2172 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:25.930204 kubelet[2172]: I0702 00:56:25.930078 2172 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:25.930204 kubelet[2172]: I0702 00:56:25.930087 2172 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:25.930204 kubelet[2172]: I0702 00:56:25.930097 2172 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:25.930424 kubelet[2172]: I0702 00:56:25.930107 2172 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:25.930424 kubelet[2172]: I0702 00:56:25.930115 2172 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/13b6bc0e-2a93-4e07-8196-361dd52f1d82-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:25.930424 kubelet[2172]: I0702 00:56:25.930125 2172 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/13b6bc0e-2a93-4e07-8196-361dd52f1d82-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:26.583733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-602e3fe4b0d0700c62c37a32b7fbd7ef0d2b15083d7d1434844e602f1db58fa9-rootfs.mount: Deactivated successfully. Jul 2 00:56:26.583883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc48c42dbd815e5acc768016f0a1fabcd5ff1d6c4c27b419e27ec79231156996-rootfs.mount: Deactivated successfully. Jul 2 00:56:26.583961 systemd[1]: var-lib-kubelet-pods-248e6805\x2d4f57\x2d4243\x2db0c5\x2dd33100cc81c6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzw8vs.mount: Deactivated successfully. Jul 2 00:56:26.584039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6-rootfs.mount: Deactivated successfully. Jul 2 00:56:26.584110 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fa52c7fef1e1180f59e5b882b38c0254ae86fbe6d01f5144d545f4b031d81f6-shm.mount: Deactivated successfully. Jul 2 00:56:26.584187 systemd[1]: var-lib-kubelet-pods-13b6bc0e\x2d2a93\x2d4e07\x2d8196\x2d361dd52f1d82-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drffbg.mount: Deactivated successfully. Jul 2 00:56:26.584259 systemd[1]: var-lib-kubelet-pods-13b6bc0e\x2d2a93\x2d4e07\x2d8196\x2d361dd52f1d82-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 00:56:26.584334 systemd[1]: var-lib-kubelet-pods-13b6bc0e\x2d2a93\x2d4e07\x2d8196\x2d361dd52f1d82-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 00:56:26.670163 kubelet[2172]: E0702 00:56:26.670121 2172 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:56:27.539261 sshd[3757]: pam_unix(sshd:session): session closed for user core Jul 2 00:56:27.542724 systemd[1]: Started sshd@21-10.0.0.97:22-10.0.0.1:50602.service. Jul 2 00:56:27.543222 systemd[1]: sshd@20-10.0.0.97:22-10.0.0.1:50590.service: Deactivated successfully. Jul 2 00:56:27.544196 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 00:56:27.544421 systemd-logind[1300]: Session 21 logged out. Waiting for processes to exit. Jul 2 00:56:27.546728 systemd-logind[1300]: Removed session 21. Jul 2 00:56:27.587652 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 50602 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:56:27.589554 sshd[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:56:27.594505 systemd[1]: Started session-22.scope. Jul 2 00:56:27.594890 systemd-logind[1300]: New session 22 of user core. Jul 2 00:56:27.597900 kubelet[2172]: I0702 00:56:27.597873 2172 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="13b6bc0e-2a93-4e07-8196-361dd52f1d82" path="/var/lib/kubelet/pods/13b6bc0e-2a93-4e07-8196-361dd52f1d82/volumes" Jul 2 00:56:27.598735 kubelet[2172]: I0702 00:56:27.598718 2172 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="248e6805-4f57-4243-b0c5-d33100cc81c6" path="/var/lib/kubelet/pods/248e6805-4f57-4243-b0c5-d33100cc81c6/volumes" Jul 2 00:56:28.735020 sshd[3927]: pam_unix(sshd:session): session closed for user core Jul 2 00:56:28.736204 systemd[1]: Started sshd@22-10.0.0.97:22-10.0.0.1:50618.service. 
Jul 2 00:56:28.754137 kubelet[2172]: I0702 00:56:28.744112 2172 topology_manager.go:215] "Topology Admit Handler" podUID="3e331e13-610c-4f1c-b668-231031573902" podNamespace="kube-system" podName="cilium-zpmb2" Jul 2 00:56:28.754137 kubelet[2172]: E0702 00:56:28.744177 2172 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13b6bc0e-2a93-4e07-8196-361dd52f1d82" containerName="mount-cgroup" Jul 2 00:56:28.754137 kubelet[2172]: E0702 00:56:28.744187 2172 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="248e6805-4f57-4243-b0c5-d33100cc81c6" containerName="cilium-operator" Jul 2 00:56:28.754137 kubelet[2172]: E0702 00:56:28.744197 2172 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13b6bc0e-2a93-4e07-8196-361dd52f1d82" containerName="cilium-agent" Jul 2 00:56:28.754137 kubelet[2172]: E0702 00:56:28.744217 2172 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13b6bc0e-2a93-4e07-8196-361dd52f1d82" containerName="apply-sysctl-overwrites" Jul 2 00:56:28.754137 kubelet[2172]: E0702 00:56:28.744224 2172 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13b6bc0e-2a93-4e07-8196-361dd52f1d82" containerName="mount-bpf-fs" Jul 2 00:56:28.754137 kubelet[2172]: E0702 00:56:28.744232 2172 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13b6bc0e-2a93-4e07-8196-361dd52f1d82" containerName="clean-cilium-state" Jul 2 00:56:28.754137 kubelet[2172]: I0702 00:56:28.744255 2172 memory_manager.go:346] "RemoveStaleState removing state" podUID="248e6805-4f57-4243-b0c5-d33100cc81c6" containerName="cilium-operator" Jul 2 00:56:28.754137 kubelet[2172]: I0702 00:56:28.744261 2172 memory_manager.go:346] "RemoveStaleState removing state" podUID="13b6bc0e-2a93-4e07-8196-361dd52f1d82" containerName="cilium-agent" Jul 2 00:56:28.747219 systemd[1]: sshd@21-10.0.0.97:22-10.0.0.1:50602.service: Deactivated successfully. Jul 2 00:56:28.748194 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 00:56:28.762611 systemd-logind[1300]: Session 22 logged out. Waiting for processes to exit. Jul 2 00:56:28.763750 systemd-logind[1300]: Removed session 22. Jul 2 00:56:28.785740 sshd[3940]: Accepted publickey for core from 10.0.0.1 port 50618 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:56:28.787151 sshd[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:56:28.790641 systemd-logind[1300]: New session 23 of user core. Jul 2 00:56:28.791064 systemd[1]: Started session-23.scope. 
Jul 2 00:56:28.849034 kubelet[2172]: I0702 00:56:28.849003 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e331e13-610c-4f1c-b668-231031573902-cilium-config-path\") pod \"cilium-zpmb2\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " pod="kube-system/cilium-zpmb2" Jul 2 00:56:28.849176 kubelet[2172]: I0702 00:56:28.849165 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsbr9\" (UniqueName: \"kubernetes.io/projected/3e331e13-610c-4f1c-b668-231031573902-kube-api-access-gsbr9\") pod \"cilium-zpmb2\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " pod="kube-system/cilium-zpmb2" Jul 2 00:56:28.849258 kubelet[2172]: I0702 00:56:28.849248 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-cilium-run\") pod \"cilium-zpmb2\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " pod="kube-system/cilium-zpmb2" Jul 2 00:56:28.849362 kubelet[2172]: I0702 00:56:28.849351 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-hostproc\") pod \"cilium-zpmb2\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " pod="kube-system/cilium-zpmb2" Jul 2 00:56:28.849503 kubelet[2172]: I0702 00:56:28.849453 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-cilium-cgroup\") pod \"cilium-zpmb2\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " pod="kube-system/cilium-zpmb2" Jul 2 00:56:28.849578 kubelet[2172]: I0702 00:56:28.849510 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3e331e13-610c-4f1c-b668-231031573902-cilium-ipsec-secrets\") pod \"cilium-zpmb2\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " pod="kube-system/cilium-zpmb2" Jul 2 00:56:28.849578 kubelet[2172]: I0702 00:56:28.849550 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-host-proc-sys-kernel\") pod \"cilium-zpmb2\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " pod="kube-system/cilium-zpmb2" Jul 2 00:56:28.849578 kubelet[2172]: I0702 00:56:28.849571 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-etc-cni-netd\") pod \"cilium-zpmb2\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " pod="kube-system/cilium-zpmb2" Jul 2 00:56:28.849659 kubelet[2172]: I0702 00:56:28.849590 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e331e13-610c-4f1c-b668-231031573902-hubble-tls\") pod \"cilium-zpmb2\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " pod="kube-system/cilium-zpmb2" Jul 2 00:56:28.849659 kubelet[2172]: I0702 00:56:28.849618 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-host-proc-sys-net\") pod \"cilium-zpmb2\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " pod="kube-system/cilium-zpmb2" Jul 2 00:56:28.849659 kubelet[2172]: I0702 00:56:28.849638 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-lib-modules\") pod \"cilium-zpmb2\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " pod="kube-system/cilium-zpmb2" Jul 2 00:56:28.849659 kubelet[2172]: I0702 00:56:28.849659 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-bpf-maps\") pod \"cilium-zpmb2\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " pod="kube-system/cilium-zpmb2" Jul 2 00:56:28.849745 kubelet[2172]: I0702 00:56:28.849679 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e331e13-610c-4f1c-b668-231031573902-clustermesh-secrets\") pod \"cilium-zpmb2\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " pod="kube-system/cilium-zpmb2" Jul 2 00:56:28.849745 kubelet[2172]: I0702 00:56:28.849704 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-cni-path\") pod \"cilium-zpmb2\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " pod="kube-system/cilium-zpmb2" Jul 2 00:56:28.849745 kubelet[2172]: I0702 00:56:28.849722 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-xtables-lock\") pod \"cilium-zpmb2\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " pod="kube-system/cilium-zpmb2" Jul 2 00:56:28.912197 kubelet[2172]: E0702 00:56:28.911644 2172 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-gsbr9 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-zpmb2" podUID="3e331e13-610c-4f1c-b668-231031573902" Jul 2 00:56:28.915307 systemd[1]: Started sshd@23-10.0.0.97:22-10.0.0.1:50626.service. Jul 2 00:56:28.916353 sshd[3940]: pam_unix(sshd:session): session closed for user core Jul 2 00:56:28.919981 systemd-logind[1300]: Session 23 logged out. Waiting for processes to exit. Jul 2 00:56:28.920265 systemd[1]: sshd@22-10.0.0.97:22-10.0.0.1:50618.service: Deactivated successfully. Jul 2 00:56:28.921125 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 00:56:28.921634 systemd-logind[1300]: Removed session 23. Jul 2 00:56:28.958260 sshd[3955]: Accepted publickey for core from 10.0.0.1 port 50626 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:56:28.959462 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:56:28.973072 systemd[1]: Started session-24.scope. Jul 2 00:56:28.973240 systemd-logind[1300]: New session 24 of user core. 
Jul 2 00:56:29.855661 kubelet[2172]: I0702 00:56:29.855616 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e331e13-610c-4f1c-b668-231031573902-clustermesh-secrets\") pod \"3e331e13-610c-4f1c-b668-231031573902\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " Jul 2 00:56:29.855661 kubelet[2172]: I0702 00:56:29.855664 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e331e13-610c-4f1c-b668-231031573902-cilium-config-path\") pod \"3e331e13-610c-4f1c-b668-231031573902\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " Jul 2 00:56:29.856057 kubelet[2172]: I0702 00:56:29.855686 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsbr9\" (UniqueName: \"kubernetes.io/projected/3e331e13-610c-4f1c-b668-231031573902-kube-api-access-gsbr9\") pod \"3e331e13-610c-4f1c-b668-231031573902\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " Jul 2 00:56:29.856057 kubelet[2172]: I0702 00:56:29.855704 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-cilium-cgroup\") pod \"3e331e13-610c-4f1c-b668-231031573902\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " Jul 2 00:56:29.856057 kubelet[2172]: I0702 00:56:29.855724 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3e331e13-610c-4f1c-b668-231031573902-cilium-ipsec-secrets\") pod \"3e331e13-610c-4f1c-b668-231031573902\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " Jul 2 00:56:29.856057 kubelet[2172]: I0702 00:56:29.855744 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-etc-cni-netd\") pod \"3e331e13-610c-4f1c-b668-231031573902\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " Jul 2 00:56:29.856057 kubelet[2172]: I0702 00:56:29.855769 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-lib-modules\") pod \"3e331e13-610c-4f1c-b668-231031573902\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " Jul 2 00:56:29.856057 kubelet[2172]: I0702 00:56:29.855792 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-cilium-run\") pod \"3e331e13-610c-4f1c-b668-231031573902\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " Jul 2 00:56:29.856190 kubelet[2172]: I0702 00:56:29.855808 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-hostproc\") pod \"3e331e13-610c-4f1c-b668-231031573902\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " Jul 2 00:56:29.856190 kubelet[2172]: I0702 00:56:29.855826 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-host-proc-sys-kernel\") pod \"3e331e13-610c-4f1c-b668-231031573902\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " Jul 2 00:56:29.856190 kubelet[2172]: 
I0702 00:56:29.855848 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e331e13-610c-4f1c-b668-231031573902-hubble-tls\") pod \"3e331e13-610c-4f1c-b668-231031573902\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " Jul 2 00:56:29.856190 kubelet[2172]: I0702 00:56:29.855866 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-host-proc-sys-net\") pod \"3e331e13-610c-4f1c-b668-231031573902\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " Jul 2 00:56:29.856190 kubelet[2172]: I0702 00:56:29.855885 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-bpf-maps\") pod \"3e331e13-610c-4f1c-b668-231031573902\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " Jul 2 00:56:29.856190 kubelet[2172]: I0702 00:56:29.855903 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-xtables-lock\") pod \"3e331e13-610c-4f1c-b668-231031573902\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " Jul 2 00:56:29.856319 kubelet[2172]: I0702 00:56:29.855921 2172 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-cni-path\") pod \"3e331e13-610c-4f1c-b668-231031573902\" (UID: \"3e331e13-610c-4f1c-b668-231031573902\") " Jul 2 00:56:29.856319 kubelet[2172]: I0702 00:56:29.855989 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-cni-path" (OuterVolumeSpecName: "cni-path") pod "3e331e13-610c-4f1c-b668-231031573902" (UID: "3e331e13-610c-4f1c-b668-231031573902"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:29.856319 kubelet[2172]: I0702 00:56:29.856277 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3e331e13-610c-4f1c-b668-231031573902" (UID: "3e331e13-610c-4f1c-b668-231031573902"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:29.856554 kubelet[2172]: I0702 00:56:29.856440 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3e331e13-610c-4f1c-b668-231031573902" (UID: "3e331e13-610c-4f1c-b668-231031573902"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:29.856554 kubelet[2172]: I0702 00:56:29.856481 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3e331e13-610c-4f1c-b668-231031573902" (UID: "3e331e13-610c-4f1c-b668-231031573902"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:29.856554 kubelet[2172]: I0702 00:56:29.856502 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3e331e13-610c-4f1c-b668-231031573902" (UID: "3e331e13-610c-4f1c-b668-231031573902"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:29.858569 kubelet[2172]: I0702 00:56:29.856706 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-hostproc" (OuterVolumeSpecName: "hostproc") pod "3e331e13-610c-4f1c-b668-231031573902" (UID: "3e331e13-610c-4f1c-b668-231031573902"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:29.858569 kubelet[2172]: I0702 00:56:29.856729 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3e331e13-610c-4f1c-b668-231031573902" (UID: "3e331e13-610c-4f1c-b668-231031573902"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:29.858569 kubelet[2172]: I0702 00:56:29.857993 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e331e13-610c-4f1c-b668-231031573902-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3e331e13-610c-4f1c-b668-231031573902" (UID: "3e331e13-610c-4f1c-b668-231031573902"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:56:29.858569 kubelet[2172]: I0702 00:56:29.858035 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3e331e13-610c-4f1c-b668-231031573902" (UID: "3e331e13-610c-4f1c-b668-231031573902"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:29.858569 kubelet[2172]: I0702 00:56:29.858054 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3e331e13-610c-4f1c-b668-231031573902" (UID: "3e331e13-610c-4f1c-b668-231031573902"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:29.859050 kubelet[2172]: I0702 00:56:29.859014 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e331e13-610c-4f1c-b668-231031573902-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3e331e13-610c-4f1c-b668-231031573902" (UID: "3e331e13-610c-4f1c-b668-231031573902"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:56:29.859110 kubelet[2172]: I0702 00:56:29.859064 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3e331e13-610c-4f1c-b668-231031573902" (UID: "3e331e13-610c-4f1c-b668-231031573902"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:56:29.859357 kubelet[2172]: I0702 00:56:29.859336 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e331e13-610c-4f1c-b668-231031573902-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "3e331e13-610c-4f1c-b668-231031573902" (UID: "3e331e13-610c-4f1c-b668-231031573902"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:56:29.859600 kubelet[2172]: I0702 00:56:29.859572 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e331e13-610c-4f1c-b668-231031573902-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3e331e13-610c-4f1c-b668-231031573902" (UID: "3e331e13-610c-4f1c-b668-231031573902"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:56:29.860100 systemd[1]: var-lib-kubelet-pods-3e331e13\x2d610c\x2d4f1c\x2db668\x2d231031573902-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 00:56:29.860238 systemd[1]: var-lib-kubelet-pods-3e331e13\x2d610c\x2d4f1c\x2db668\x2d231031573902-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 00:56:29.860320 systemd[1]: var-lib-kubelet-pods-3e331e13\x2d610c\x2d4f1c\x2db668\x2d231031573902-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 00:56:29.862476 kubelet[2172]: I0702 00:56:29.862426 2172 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e331e13-610c-4f1c-b668-231031573902-kube-api-access-gsbr9" (OuterVolumeSpecName: "kube-api-access-gsbr9") pod "3e331e13-610c-4f1c-b668-231031573902" (UID: "3e331e13-610c-4f1c-b668-231031573902"). InnerVolumeSpecName "kube-api-access-gsbr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:56:29.954715 systemd[1]: var-lib-kubelet-pods-3e331e13\x2d610c\x2d4f1c\x2db668\x2d231031573902-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgsbr9.mount: Deactivated successfully. 
Jul 2 00:56:29.956801 kubelet[2172]: I0702 00:56:29.956766 2172 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e331e13-610c-4f1c-b668-231031573902-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:29.956801 kubelet[2172]: I0702 00:56:29.956801 2172 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gsbr9\" (UniqueName: \"kubernetes.io/projected/3e331e13-610c-4f1c-b668-231031573902-kube-api-access-gsbr9\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:29.956904 kubelet[2172]: I0702 00:56:29.956813 2172 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:29.956904 kubelet[2172]: I0702 00:56:29.956824 2172 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e331e13-610c-4f1c-b668-231031573902-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:29.956904 kubelet[2172]: I0702 00:56:29.956834 2172 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3e331e13-610c-4f1c-b668-231031573902-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:29.956904 kubelet[2172]: I0702 00:56:29.956844 2172 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:29.956904 kubelet[2172]: I0702 00:56:29.956852 2172 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:29.956904 kubelet[2172]: I0702 00:56:29.956862 2172 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:29.956904 kubelet[2172]: I0702 00:56:29.956874 2172 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:29.956904 kubelet[2172]: I0702 00:56:29.956883 2172 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:29.957070 kubelet[2172]: I0702 00:56:29.956892 2172 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e331e13-610c-4f1c-b668-231031573902-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:29.957070 kubelet[2172]: I0702 00:56:29.956902 2172 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:29.957070 kubelet[2172]: I0702 00:56:29.956910 2172 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:29.957070 kubelet[2172]: 
I0702 00:56:29.956919 2172 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:29.957070 kubelet[2172]: I0702 00:56:29.956928 2172 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e331e13-610c-4f1c-b668-231031573902-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 00:56:30.824861 kubelet[2172]: I0702 00:56:30.824821 2172 topology_manager.go:215] "Topology Admit Handler" podUID="0f2f60f1-93f4-47f4-bab0-19fffca56085" podNamespace="kube-system" podName="cilium-wjxll" Jul 2 00:56:30.830682 kubelet[2172]: W0702 00:56:30.830642 2172 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 00:56:30.830682 kubelet[2172]: E0702 00:56:30.830686 2172 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 00:56:30.830931 kubelet[2172]: W0702 00:56:30.830781 2172 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 00:56:30.830931 kubelet[2172]: E0702 00:56:30.830797 2172 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 00:56:30.830931 kubelet[2172]: W0702 00:56:30.830853 2172 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 00:56:30.830931 kubelet[2172]: E0702 00:56:30.830865 2172 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 00:56:30.832094 kubelet[2172]: W0702 00:56:30.832076 2172 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 00:56:30.832207 kubelet[2172]: E0702 00:56:30.832195 2172 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" 
is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 00:56:30.861510 kubelet[2172]: I0702 00:56:30.861473 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0f2f60f1-93f4-47f4-bab0-19fffca56085-cilium-run\") pod \"cilium-wjxll\" (UID: \"0f2f60f1-93f4-47f4-bab0-19fffca56085\") " pod="kube-system/cilium-wjxll" Jul 2 00:56:30.861510 kubelet[2172]: I0702 00:56:30.861514 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f2f60f1-93f4-47f4-bab0-19fffca56085-etc-cni-netd\") pod \"cilium-wjxll\" (UID: \"0f2f60f1-93f4-47f4-bab0-19fffca56085\") " pod="kube-system/cilium-wjxll" Jul 2 00:56:30.861882 kubelet[2172]: I0702 00:56:30.861547 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0f2f60f1-93f4-47f4-bab0-19fffca56085-hostproc\") pod \"cilium-wjxll\" (UID: \"0f2f60f1-93f4-47f4-bab0-19fffca56085\") " pod="kube-system/cilium-wjxll" Jul 2 00:56:30.861882 kubelet[2172]: I0702 00:56:30.861567 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0f2f60f1-93f4-47f4-bab0-19fffca56085-cilium-cgroup\") pod \"cilium-wjxll\" (UID: \"0f2f60f1-93f4-47f4-bab0-19fffca56085\") " pod="kube-system/cilium-wjxll" Jul 2 00:56:30.861882 kubelet[2172]: I0702 00:56:30.861585 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0f2f60f1-93f4-47f4-bab0-19fffca56085-hubble-tls\") pod \"cilium-wjxll\" (UID: \"0f2f60f1-93f4-47f4-bab0-19fffca56085\") " pod="kube-system/cilium-wjxll" Jul 2 00:56:30.861882 kubelet[2172]: I0702 00:56:30.861604 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzwd9\" (UniqueName: \"kubernetes.io/projected/0f2f60f1-93f4-47f4-bab0-19fffca56085-kube-api-access-kzwd9\") pod \"cilium-wjxll\" (UID: \"0f2f60f1-93f4-47f4-bab0-19fffca56085\") " pod="kube-system/cilium-wjxll" Jul 2 00:56:30.861882 kubelet[2172]: I0702 00:56:30.861623 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f2f60f1-93f4-47f4-bab0-19fffca56085-lib-modules\") pod \"cilium-wjxll\" (UID: \"0f2f60f1-93f4-47f4-bab0-19fffca56085\") " pod="kube-system/cilium-wjxll" Jul 2 00:56:30.861882 kubelet[2172]: I0702 00:56:30.861642 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0f2f60f1-93f4-47f4-bab0-19fffca56085-cilium-ipsec-secrets\") pod \"cilium-wjxll\" (UID: \"0f2f60f1-93f4-47f4-bab0-19fffca56085\") " pod="kube-system/cilium-wjxll" Jul 2 00:56:30.862012 kubelet[2172]: I0702 00:56:30.861660 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0f2f60f1-93f4-47f4-bab0-19fffca56085-host-proc-sys-kernel\") pod \"cilium-wjxll\" (UID: \"0f2f60f1-93f4-47f4-bab0-19fffca56085\") " pod="kube-system/cilium-wjxll" Jul 2 
00:56:30.862012 kubelet[2172]: I0702 00:56:30.861678 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f2f60f1-93f4-47f4-bab0-19fffca56085-cilium-config-path\") pod \"cilium-wjxll\" (UID: \"0f2f60f1-93f4-47f4-bab0-19fffca56085\") " pod="kube-system/cilium-wjxll" Jul 2 00:56:30.862012 kubelet[2172]: I0702 00:56:30.861695 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0f2f60f1-93f4-47f4-bab0-19fffca56085-cni-path\") pod \"cilium-wjxll\" (UID: \"0f2f60f1-93f4-47f4-bab0-19fffca56085\") " pod="kube-system/cilium-wjxll" Jul 2 00:56:30.862012 kubelet[2172]: I0702 00:56:30.861717 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0f2f60f1-93f4-47f4-bab0-19fffca56085-host-proc-sys-net\") pod \"cilium-wjxll\" (UID: \"0f2f60f1-93f4-47f4-bab0-19fffca56085\") " pod="kube-system/cilium-wjxll" Jul 2 00:56:30.862012 kubelet[2172]: I0702 00:56:30.861736 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0f2f60f1-93f4-47f4-bab0-19fffca56085-bpf-maps\") pod \"cilium-wjxll\" (UID: \"0f2f60f1-93f4-47f4-bab0-19fffca56085\") " pod="kube-system/cilium-wjxll" Jul 2 00:56:30.862012 kubelet[2172]: I0702 00:56:30.861755 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f2f60f1-93f4-47f4-bab0-19fffca56085-xtables-lock\") pod \"cilium-wjxll\" (UID: \"0f2f60f1-93f4-47f4-bab0-19fffca56085\") " pod="kube-system/cilium-wjxll" Jul 2 00:56:30.862134 kubelet[2172]: I0702 00:56:30.861784 2172 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0f2f60f1-93f4-47f4-bab0-19fffca56085-clustermesh-secrets\") pod \"cilium-wjxll\" (UID: \"0f2f60f1-93f4-47f4-bab0-19fffca56085\") " pod="kube-system/cilium-wjxll" Jul 2 00:56:31.598100 kubelet[2172]: I0702 00:56:31.598058 2172 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3e331e13-610c-4f1c-b668-231031573902" path="/var/lib/kubelet/pods/3e331e13-610c-4f1c-b668-231031573902/volumes" Jul 2 00:56:31.670899 kubelet[2172]: E0702 00:56:31.670868 2172 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:56:31.963422 kubelet[2172]: E0702 00:56:31.963279 2172 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 2 00:56:31.963422 kubelet[2172]: E0702 00:56:31.963366 2172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f2f60f1-93f4-47f4-bab0-19fffca56085-cilium-config-path podName:0f2f60f1-93f4-47f4-bab0-19fffca56085 nodeName:}" failed. No retries permitted until 2024-07-02 00:56:32.463345429 +0000 UTC m=+80.986238538 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/0f2f60f1-93f4-47f4-bab0-19fffca56085-cilium-config-path") pod "cilium-wjxll" (UID: "0f2f60f1-93f4-47f4-bab0-19fffca56085") : failed to sync configmap cache: timed out waiting for the condition Jul 2 00:56:31.965899 kubelet[2172]: E0702 00:56:31.965868 2172 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 00:56:31.966071 kubelet[2172]: E0702 00:56:31.966000 2172 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-wjxll: failed to sync secret cache: timed out waiting for the condition Jul 2 00:56:31.966160 kubelet[2172]: E0702 00:56:31.965907 2172 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jul 2 00:56:31.966262 kubelet[2172]: E0702 00:56:31.966247 2172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f2f60f1-93f4-47f4-bab0-19fffca56085-hubble-tls podName:0f2f60f1-93f4-47f4-bab0-19fffca56085 nodeName:}" failed. No retries permitted until 2024-07-02 00:56:32.46622712 +0000 UTC m=+80.989120229 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/0f2f60f1-93f4-47f4-bab0-19fffca56085-hubble-tls") pod "cilium-wjxll" (UID: "0f2f60f1-93f4-47f4-bab0-19fffca56085") : failed to sync secret cache: timed out waiting for the condition Jul 2 00:56:31.966360 kubelet[2172]: E0702 00:56:31.966348 2172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f2f60f1-93f4-47f4-bab0-19fffca56085-cilium-ipsec-secrets podName:0f2f60f1-93f4-47f4-bab0-19fffca56085 nodeName:}" failed. No retries permitted until 2024-07-02 00:56:32.466335481 +0000 UTC m=+80.989228590 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/0f2f60f1-93f4-47f4-bab0-19fffca56085-cilium-ipsec-secrets") pod "cilium-wjxll" (UID: "0f2f60f1-93f4-47f4-bab0-19fffca56085") : failed to sync secret cache: timed out waiting for the condition Jul 2 00:56:32.628893 kubelet[2172]: E0702 00:56:32.628860 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:56:32.629408 env[1312]: time="2024-07-02T00:56:32.629359100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wjxll,Uid:0f2f60f1-93f4-47f4-bab0-19fffca56085,Namespace:kube-system,Attempt:0,}" Jul 2 00:56:32.643138 env[1312]: time="2024-07-02T00:56:32.643071955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:56:32.643235 env[1312]: time="2024-07-02T00:56:32.643117155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:56:32.643235 env[1312]: time="2024-07-02T00:56:32.643127715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:56:32.643465 env[1312]: time="2024-07-02T00:56:32.643433156Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bcd8d711f192c71571aa44818b861ba5e677c97b4ff6c6d0b7e199399a9407f4 pid=3988 runtime=io.containerd.runc.v2 Jul 2 00:56:32.685462 env[1312]: time="2024-07-02T00:56:32.685419604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wjxll,Uid:0f2f60f1-93f4-47f4-bab0-19fffca56085,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcd8d711f192c71571aa44818b861ba5e677c97b4ff6c6d0b7e199399a9407f4\"" Jul 2 00:56:32.686199 kubelet[2172]: E0702 00:56:32.686178 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:56:32.687865 env[1312]: time="2024-07-02T00:56:32.687828374Z" level=info msg="CreateContainer within sandbox \"bcd8d711f192c71571aa44818b861ba5e677c97b4ff6c6d0b7e199399a9407f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:56:32.696069 env[1312]: time="2024-07-02T00:56:32.696020327Z" level=info msg="CreateContainer within sandbox \"bcd8d711f192c71571aa44818b861ba5e677c97b4ff6c6d0b7e199399a9407f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"967e2d67a56ac7d0558dd2865ddb5c18debf6ec5991cf2eea4af628e6546a6e7\"" Jul 2 00:56:32.696618 env[1312]: time="2024-07-02T00:56:32.696588449Z" level=info msg="StartContainer for \"967e2d67a56ac7d0558dd2865ddb5c18debf6ec5991cf2eea4af628e6546a6e7\"" Jul 2 00:56:32.741846 env[1312]: time="2024-07-02T00:56:32.741801190Z" level=info msg="StartContainer for \"967e2d67a56ac7d0558dd2865ddb5c18debf6ec5991cf2eea4af628e6546a6e7\" returns successfully" Jul 2 00:56:32.782117 env[1312]: time="2024-07-02T00:56:32.782072751Z" level=info msg="shim disconnected" id=967e2d67a56ac7d0558dd2865ddb5c18debf6ec5991cf2eea4af628e6546a6e7 Jul 2 00:56:32.782333 env[1312]: time="2024-07-02T00:56:32.782314432Z" level=warning msg="cleaning up after shim disconnected" id=967e2d67a56ac7d0558dd2865ddb5c18debf6ec5991cf2eea4af628e6546a6e7 namespace=k8s.io Jul 2 00:56:32.782394 env[1312]: time="2024-07-02T00:56:32.782381272Z" level=info msg="cleaning up dead shim" Jul 2 00:56:32.789444 env[1312]: time="2024-07-02T00:56:32.789383940Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:56:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4070 runtime=io.containerd.runc.v2\n" Jul 2 00:56:32.795384 kubelet[2172]: E0702 00:56:32.795342 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:56:32.799300 env[1312]: time="2024-07-02T00:56:32.799213980Z" level=info msg="CreateContainer within sandbox \"bcd8d711f192c71571aa44818b861ba5e677c97b4ff6c6d0b7e199399a9407f4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:56:32.810653 env[1312]: time="2024-07-02T00:56:32.810605905Z" level=info msg="CreateContainer within sandbox \"bcd8d711f192c71571aa44818b861ba5e677c97b4ff6c6d0b7e199399a9407f4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"11a9c6752520763211010b591d3459e4faf1faaf88bfa6dc8b054bf9481c7e43\"" Jul 2 00:56:32.812418 env[1312]: time="2024-07-02T00:56:32.812297912Z" level=info msg="StartContainer for 
\"11a9c6752520763211010b591d3459e4faf1faaf88bfa6dc8b054bf9481c7e43\"" Jul 2 00:56:32.857488 env[1312]: time="2024-07-02T00:56:32.856275008Z" level=info msg="StartContainer for \"11a9c6752520763211010b591d3459e4faf1faaf88bfa6dc8b054bf9481c7e43\" returns successfully" Jul 2 00:56:32.879671 env[1312]: time="2024-07-02T00:56:32.879553062Z" level=info msg="shim disconnected" id=11a9c6752520763211010b591d3459e4faf1faaf88bfa6dc8b054bf9481c7e43 Jul 2 00:56:32.879671 env[1312]: time="2024-07-02T00:56:32.879597942Z" level=warning msg="cleaning up after shim disconnected" id=11a9c6752520763211010b591d3459e4faf1faaf88bfa6dc8b054bf9481c7e43 namespace=k8s.io Jul 2 00:56:32.879671 env[1312]: time="2024-07-02T00:56:32.879606822Z" level=info msg="cleaning up dead shim" Jul 2 00:56:32.886494 env[1312]: time="2024-07-02T00:56:32.886460489Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:56:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4134 runtime=io.containerd.runc.v2\n" Jul 2 00:56:33.417518 kubelet[2172]: I0702 00:56:33.417492 2172 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T00:56:33Z","lastTransitionTime":"2024-07-02T00:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 00:56:33.799134 kubelet[2172]: E0702 00:56:33.799109 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:56:33.802074 env[1312]: time="2024-07-02T00:56:33.802027151Z" level=info msg="CreateContainer within sandbox \"bcd8d711f192c71571aa44818b861ba5e677c97b4ff6c6d0b7e199399a9407f4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:56:33.813050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123881112.mount: Deactivated successfully. 
Jul 2 00:56:33.815701 env[1312]: time="2024-07-02T00:56:33.815613484Z" level=info msg="CreateContainer within sandbox \"bcd8d711f192c71571aa44818b861ba5e677c97b4ff6c6d0b7e199399a9407f4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fd85ac1c76c29fe85c64a0dc846cd77c0ded7d3b576b5e7267d2a11155fdd8f2\"" Jul 2 00:56:33.816278 env[1312]: time="2024-07-02T00:56:33.816235366Z" level=info msg="StartContainer for \"fd85ac1c76c29fe85c64a0dc846cd77c0ded7d3b576b5e7267d2a11155fdd8f2\"" Jul 2 00:56:33.871005 env[1312]: time="2024-07-02T00:56:33.870965739Z" level=info msg="StartContainer for \"fd85ac1c76c29fe85c64a0dc846cd77c0ded7d3b576b5e7267d2a11155fdd8f2\" returns successfully" Jul 2 00:56:33.891373 env[1312]: time="2024-07-02T00:56:33.891325099Z" level=info msg="shim disconnected" id=fd85ac1c76c29fe85c64a0dc846cd77c0ded7d3b576b5e7267d2a11155fdd8f2 Jul 2 00:56:33.891373 env[1312]: time="2024-07-02T00:56:33.891369739Z" level=warning msg="cleaning up after shim disconnected" id=fd85ac1c76c29fe85c64a0dc846cd77c0ded7d3b576b5e7267d2a11155fdd8f2 namespace=k8s.io Jul 2 00:56:33.891373 env[1312]: time="2024-07-02T00:56:33.891378819Z" level=info msg="cleaning up dead shim" Jul 2 00:56:33.898282 env[1312]: time="2024-07-02T00:56:33.898215606Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:56:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4191 runtime=io.containerd.runc.v2\n" Jul 2 00:56:34.478384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd85ac1c76c29fe85c64a0dc846cd77c0ded7d3b576b5e7267d2a11155fdd8f2-rootfs.mount: Deactivated successfully. Jul 2 00:56:34.802790 kubelet[2172]: E0702 00:56:34.802757 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:56:34.811546 env[1312]: time="2024-07-02T00:56:34.806976466Z" level=info msg="CreateContainer within sandbox \"bcd8d711f192c71571aa44818b861ba5e677c97b4ff6c6d0b7e199399a9407f4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:56:34.825895 env[1312]: time="2024-07-02T00:56:34.825846697Z" level=info msg="CreateContainer within sandbox \"bcd8d711f192c71571aa44818b861ba5e677c97b4ff6c6d0b7e199399a9407f4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0626ec8f8c0f4f51f02986ce76561629fa04b74ef4a6d5e43b4b34b064f25362\"" Jul 2 00:56:34.826716 env[1312]: time="2024-07-02T00:56:34.826683061Z" level=info msg="StartContainer for \"0626ec8f8c0f4f51f02986ce76561629fa04b74ef4a6d5e43b4b34b064f25362\"" Jul 2 00:56:34.886483 env[1312]: time="2024-07-02T00:56:34.886432327Z" level=info msg="StartContainer for \"0626ec8f8c0f4f51f02986ce76561629fa04b74ef4a6d5e43b4b34b064f25362\" returns successfully" Jul 2 00:56:34.905093 env[1312]: time="2024-07-02T00:56:34.905049758Z" level=info msg="shim disconnected" id=0626ec8f8c0f4f51f02986ce76561629fa04b74ef4a6d5e43b4b34b064f25362 Jul 2 00:56:34.905315 env[1312]: time="2024-07-02T00:56:34.905295959Z" level=warning msg="cleaning up after shim disconnected" id=0626ec8f8c0f4f51f02986ce76561629fa04b74ef4a6d5e43b4b34b064f25362 namespace=k8s.io Jul 2 00:56:34.905379 env[1312]: time="2024-07-02T00:56:34.905365639Z" level=info msg="cleaning up dead shim" Jul 2 00:56:34.913022 env[1312]: time="2024-07-02T00:56:34.912978388Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:56:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4246 
runtime=io.containerd.runc.v2\n" Jul 2 00:56:35.478411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0626ec8f8c0f4f51f02986ce76561629fa04b74ef4a6d5e43b4b34b064f25362-rootfs.mount: Deactivated successfully. Jul 2 00:56:35.806452 kubelet[2172]: E0702 00:56:35.806397 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:56:35.813695 env[1312]: time="2024-07-02T00:56:35.813641646Z" level=info msg="CreateContainer within sandbox \"bcd8d711f192c71571aa44818b861ba5e677c97b4ff6c6d0b7e199399a9407f4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:56:35.824412 env[1312]: time="2024-07-02T00:56:35.824369526Z" level=info msg="CreateContainer within sandbox \"bcd8d711f192c71571aa44818b861ba5e677c97b4ff6c6d0b7e199399a9407f4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4b69e22a032fd1aa03adfa4d00bbef61d8c9ac545a1dfc0379691a78cd5f17d1\"" Jul 2 00:56:35.825308 env[1312]: time="2024-07-02T00:56:35.825282250Z" level=info msg="StartContainer for \"4b69e22a032fd1aa03adfa4d00bbef61d8c9ac545a1dfc0379691a78cd5f17d1\"" Jul 2 00:56:35.883589 env[1312]: time="2024-07-02T00:56:35.881881899Z" level=info msg="StartContainer for \"4b69e22a032fd1aa03adfa4d00bbef61d8c9ac545a1dfc0379691a78cd5f17d1\" returns successfully" Jul 2 00:56:36.115572 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 2 00:56:36.811706 kubelet[2172]: E0702 00:56:36.811661 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:56:36.827141 kubelet[2172]: I0702 00:56:36.827100 2172 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-wjxll" podStartSLOduration=6.827059513 podCreationTimestamp="2024-07-02 00:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:56:36.82622327 +0000 UTC m=+85.349116379" watchObservedRunningTime="2024-07-02 00:56:36.827059513 +0000 UTC m=+85.349952622" Jul 2 00:56:37.236107 systemd[1]: run-containerd-runc-k8s.io-4b69e22a032fd1aa03adfa4d00bbef61d8c9ac545a1dfc0379691a78cd5f17d1-runc.apyRW9.mount: Deactivated successfully. 
Jul 2 00:56:38.596140 kubelet[2172]: E0702 00:56:38.596100 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:56:38.630327 kubelet[2172]: E0702 00:56:38.630285 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:56:38.745137 systemd-networkd[1093]: lxc_health: Link UP Jul 2 00:56:38.755212 systemd-networkd[1093]: lxc_health: Gained carrier Jul 2 00:56:38.755563 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 00:56:39.416823 kubelet[2172]: E0702 00:56:39.416680 2172 upgradeaware.go:439] Error proxying data from backend to client: read tcp 127.0.0.1:52440->127.0.0.1:43221: read: connection reset by peer Jul 2 00:56:40.361674 systemd-networkd[1093]: lxc_health: Gained IPv6LL Jul 2 00:56:40.596397 kubelet[2172]: E0702 00:56:40.596364 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:56:40.631080 kubelet[2172]: E0702 00:56:40.630996 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:56:40.818301 kubelet[2172]: E0702 00:56:40.818261 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:56:44.596449 kubelet[2172]: E0702 00:56:44.596412 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:56:45.827377 sshd[3955]: pam_unix(sshd:session): session closed for user core Jul 2 00:56:45.829724 systemd[1]: sshd@23-10.0.0.97:22-10.0.0.1:50626.service: Deactivated successfully. Jul 2 00:56:45.830911 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:56:45.830917 systemd-logind[1300]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:56:45.831802 systemd-logind[1300]: Removed session 24.