Jul 11 00:32:43.731000 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 11 00:32:43.731020 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu Jul 10 23:22:35 -00 2025
Jul 11 00:32:43.731027 kernel: efi: EFI v2.70 by EDK II
Jul 11 00:32:43.731033 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Jul 11 00:32:43.731038 kernel: random: crng init done
Jul 11 00:32:43.731043 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:32:43.731050 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Jul 11 00:32:43.731057 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 11 00:32:43.731062 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:32:43.731067 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:32:43.731072 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:32:43.731078 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:32:43.731083 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:32:43.731088 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:32:43.731096 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:32:43.731102 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:32:43.731108 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:32:43.731113 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 11 00:32:43.731119 kernel: NUMA: Failed to initialise from firmware
Jul 11 00:32:43.731124 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:32:43.731130 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Jul 11 00:32:43.731135 kernel: Zone ranges:
Jul 11 00:32:43.731141 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:32:43.731147 kernel: DMA32 empty
Jul 11 00:32:43.731153 kernel: Normal empty
Jul 11 00:32:43.731158 kernel: Movable zone start for each node
Jul 11 00:32:43.731164 kernel: Early memory node ranges
Jul 11 00:32:43.731169 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Jul 11 00:32:43.731175 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Jul 11 00:32:43.731180 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Jul 11 00:32:43.731186 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Jul 11 00:32:43.731191 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Jul 11 00:32:43.731196 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Jul 11 00:32:43.731202 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Jul 11 00:32:43.731208 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:32:43.731214 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 11 00:32:43.731220 kernel: psci: probing for conduit method from ACPI.
Jul 11 00:32:43.731225 kernel: psci: PSCIv1.1 detected in firmware.
Jul 11 00:32:43.731231 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 11 00:32:43.731236 kernel: psci: Trusted OS migration not required
Jul 11 00:32:43.731244 kernel: psci: SMC Calling Convention v1.1
Jul 11 00:32:43.731258 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 11 00:32:43.731267 kernel: ACPI: SRAT not present
Jul 11 00:32:43.731274 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Jul 11 00:32:43.731280 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Jul 11 00:32:43.731286 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 11 00:32:43.731292 kernel: Detected PIPT I-cache on CPU0
Jul 11 00:32:43.731298 kernel: CPU features: detected: GIC system register CPU interface
Jul 11 00:32:43.731304 kernel: CPU features: detected: Hardware dirty bit management
Jul 11 00:32:43.731310 kernel: CPU features: detected: Spectre-v4
Jul 11 00:32:43.731316 kernel: CPU features: detected: Spectre-BHB
Jul 11 00:32:43.731323 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 11 00:32:43.731329 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 11 00:32:43.731335 kernel: CPU features: detected: ARM erratum 1418040
Jul 11 00:32:43.731341 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 11 00:32:43.731347 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 11 00:32:43.731353 kernel: Policy zone: DMA
Jul 11 00:32:43.731360 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8fd3ef416118421b63f30b3d02e5d4feea39e34704e91050cdad11fae31df42c
Jul 11 00:32:43.731366 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:32:43.731372 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:32:43.731378 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:32:43.731384 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:32:43.731391 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Jul 11 00:32:43.731398 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:32:43.731404 kernel: trace event string verifier disabled
Jul 11 00:32:43.731409 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:32:43.731416 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:32:43.731422 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:32:43.731428 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:32:43.731434 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:32:43.731440 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 00:32:43.731446 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:32:43.731453 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 11 00:32:43.731459 kernel: GICv3: 256 SPIs implemented
Jul 11 00:32:43.731465 kernel: GICv3: 0 Extended SPIs implemented
Jul 11 00:32:43.731471 kernel: GICv3: Distributor has no Range Selector support
Jul 11 00:32:43.731481 kernel: Root IRQ handler: gic_handle_irq
Jul 11 00:32:43.731487 kernel: GICv3: 16 PPIs implemented
Jul 11 00:32:43.731493 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 11 00:32:43.731498 kernel: ACPI: SRAT not present
Jul 11 00:32:43.731504 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 11 00:32:43.731510 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 11 00:32:43.731517 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Jul 11 00:32:43.731523 kernel: GICv3: using LPI property table @0x00000000400d0000
Jul 11 00:32:43.731529 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Jul 11 00:32:43.731536 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:32:43.731542 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 11 00:32:43.731548 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 11 00:32:43.731554 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 11 00:32:43.731560 kernel: arm-pv: using stolen time PV
Jul 11 00:32:43.731567 kernel: Console: colour dummy device 80x25
Jul 11 00:32:43.731573 kernel: ACPI: Core revision 20210730
Jul 11 00:32:43.731579 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 11 00:32:43.731585 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:32:43.731591 kernel: LSM: Security Framework initializing
Jul 11 00:32:43.731598 kernel: SELinux: Initializing.
Jul 11 00:32:43.731604 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:32:43.731611 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:32:43.731617 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:32:43.731623 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 11 00:32:43.731636 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 11 00:32:43.731643 kernel: Remapping and enabling EFI services.
Jul 11 00:32:43.731650 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:32:43.731656 kernel: Detected PIPT I-cache on CPU1
Jul 11 00:32:43.731663 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 11 00:32:43.731670 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Jul 11 00:32:43.731676 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:32:43.731682 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 11 00:32:43.731688 kernel: Detected PIPT I-cache on CPU2
Jul 11 00:32:43.731695 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 11 00:32:43.731701 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Jul 11 00:32:43.731707 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:32:43.731713 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 11 00:32:43.731720 kernel: Detected PIPT I-cache on CPU3
Jul 11 00:32:43.731727 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 11 00:32:43.731733 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Jul 11 00:32:43.731739 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:32:43.731746 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 11 00:32:43.731756 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:32:43.731763 kernel: SMP: Total of 4 processors activated.
Jul 11 00:32:43.731770 kernel: CPU features: detected: 32-bit EL0 Support
Jul 11 00:32:43.731776 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 11 00:32:43.731783 kernel: CPU features: detected: Common not Private translations
Jul 11 00:32:43.731789 kernel: CPU features: detected: CRC32 instructions
Jul 11 00:32:43.731796 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 11 00:32:43.731802 kernel: CPU features: detected: LSE atomic instructions
Jul 11 00:32:43.731810 kernel: CPU features: detected: Privileged Access Never
Jul 11 00:32:43.731816 kernel: CPU features: detected: RAS Extension Support
Jul 11 00:32:43.731823 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 11 00:32:43.731829 kernel: CPU: All CPU(s) started at EL1
Jul 11 00:32:43.731836 kernel: alternatives: patching kernel code
Jul 11 00:32:43.731843 kernel: devtmpfs: initialized
Jul 11 00:32:43.731849 kernel: KASLR enabled
Jul 11 00:32:43.731856 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:32:43.731862 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:32:43.731869 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:32:43.731875 kernel: SMBIOS 3.0.0 present.
Jul 11 00:32:43.731882 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jul 11 00:32:43.731888 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:32:43.731895 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 11 00:32:43.731902 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 11 00:32:43.731909 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 11 00:32:43.731915 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:32:43.731922 kernel: audit: type=2000 audit(0.035:1): state=initialized audit_enabled=0 res=1
Jul 11 00:32:43.731928 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:32:43.731935 kernel: cpuidle: using governor menu
Jul 11 00:32:43.731942 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 11 00:32:43.731948 kernel: ASID allocator initialised with 32768 entries
Jul 11 00:32:43.731954 kernel: ACPI: bus type PCI registered
Jul 11 00:32:43.731962 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:32:43.731968 kernel: Serial: AMBA PL011 UART driver
Jul 11 00:32:43.731975 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:32:43.731981 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 11 00:32:43.731988 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:32:43.731994 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 11 00:32:43.732001 kernel: cryptd: max_cpu_qlen set to 1000
Jul 11 00:32:43.732007 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 11 00:32:43.732014 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:32:43.732021 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:32:43.732028 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:32:43.732034 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 11 00:32:43.732040 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 11 00:32:43.732047 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 11 00:32:43.732053 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:32:43.732060 kernel: ACPI: Interpreter enabled
Jul 11 00:32:43.732066 kernel: ACPI: Using GIC for interrupt routing
Jul 11 00:32:43.732073 kernel: ACPI: MCFG table detected, 1 entries
Jul 11 00:32:43.732081 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 11 00:32:43.732087 kernel: printk: console [ttyAMA0] enabled
Jul 11 00:32:43.732094 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:32:43.732233 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:32:43.732308 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 11 00:32:43.732366 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 11 00:32:43.732421 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 11 00:32:43.732478 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 11 00:32:43.732487 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 11 00:32:43.732494 kernel: PCI host bridge to bus 0000:00
Jul 11 00:32:43.732584 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 11 00:32:43.732660 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 11 00:32:43.732713 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 11 00:32:43.732763 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:32:43.733337 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 11 00:32:43.733436 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 11 00:32:43.733496 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 11 00:32:43.733609 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 11 00:32:43.733718 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:32:43.733783 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:32:43.733838 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 11 00:32:43.733902 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 11 00:32:43.733954 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 11 00:32:43.734004 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 11 00:32:43.734054 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 11 00:32:43.734063 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 11 00:32:43.734070 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 11 00:32:43.734077 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 11 00:32:43.734085 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 11 00:32:43.734092 kernel: iommu: Default domain type: Translated
Jul 11 00:32:43.734098 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 11 00:32:43.734105 kernel: vgaarb: loaded
Jul 11 00:32:43.734111 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 11 00:32:43.734118 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 11 00:32:43.734125 kernel: PTP clock support registered
Jul 11 00:32:43.734131 kernel: Registered efivars operations
Jul 11 00:32:43.734138 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 11 00:32:43.734144 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:32:43.734152 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:32:43.734159 kernel: pnp: PnP ACPI init
Jul 11 00:32:43.734227 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 11 00:32:43.734237 kernel: pnp: PnP ACPI: found 1 devices
Jul 11 00:32:43.734244 kernel: NET: Registered PF_INET protocol family
Jul 11 00:32:43.734258 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:32:43.734266 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:32:43.734273 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:32:43.734282 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:32:43.734288 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 11 00:32:43.734295 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:32:43.734301 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:32:43.734308 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:32:43.734314 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:32:43.734321 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:32:43.734328 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 11 00:32:43.734334 kernel: kvm [1]: HYP mode not available
Jul 11 00:32:43.734342 kernel: Initialise system trusted keyrings
Jul 11 00:32:43.734348 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:32:43.734355 kernel: Key type asymmetric registered
Jul 11 00:32:43.734361 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:32:43.734368 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 11 00:32:43.734374 kernel: io scheduler mq-deadline registered
Jul 11 00:32:43.734381 kernel: io scheduler kyber registered
Jul 11 00:32:43.734387 kernel: io scheduler bfq registered
Jul 11 00:32:43.734394 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 11 00:32:43.734402 kernel: ACPI: button: Power Button [PWRB]
Jul 11 00:32:43.734409 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 11 00:32:43.734472 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 11 00:32:43.734481 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:32:43.734488 kernel: thunder_xcv, ver 1.0
Jul 11 00:32:43.734494 kernel: thunder_bgx, ver 1.0
Jul 11 00:32:43.734501 kernel: nicpf, ver 1.0
Jul 11 00:32:43.734507 kernel: nicvf, ver 1.0
Jul 11 00:32:43.734571 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 11 00:32:43.734627 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-11T00:32:43 UTC (1752193963)
Jul 11 00:32:43.734646 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 11 00:32:43.734653 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:32:43.737359 kernel: Segment Routing with IPv6
Jul 11 00:32:43.737368 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:32:43.737375 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:32:43.737381 kernel: Key type dns_resolver registered
Jul 11 00:32:43.737388 kernel: registered taskstats version 1
Jul 11 00:32:43.737403 kernel: Loading compiled-in X.509 certificates
Jul 11 00:32:43.737411 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: e29f2f0310c2b60e0457f826e7476605fb3b6ab2'
Jul 11 00:32:43.737418 kernel: Key type .fscrypt registered
Jul 11 00:32:43.737424 kernel: Key type fscrypt-provisioning registered
Jul 11 00:32:43.737431 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 00:32:43.737438 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:32:43.737449 kernel: ima: No architecture policies found
Jul 11 00:32:43.737455 kernel: clk: Disabling unused clocks
Jul 11 00:32:43.737462 kernel: Freeing unused kernel memory: 36416K
Jul 11 00:32:43.737474 kernel: Run /init as init process
Jul 11 00:32:43.737482 kernel: with arguments:
Jul 11 00:32:43.737490 kernel: /init
Jul 11 00:32:43.737498 kernel: with environment:
Jul 11 00:32:43.737505 kernel: HOME=/
Jul 11 00:32:43.737512 kernel: TERM=linux
Jul 11 00:32:43.737519 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:32:43.737528 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 11 00:32:43.737538 systemd[1]: Detected virtualization kvm.
Jul 11 00:32:43.737545 systemd[1]: Detected architecture arm64.
Jul 11 00:32:43.737553 systemd[1]: Running in initrd.
Jul 11 00:32:43.737560 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:32:43.737567 systemd[1]: Hostname set to .
Jul 11 00:32:43.737575 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:32:43.737582 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:32:43.737588 systemd[1]: Started systemd-ask-password-console.path.
Jul 11 00:32:43.737597 systemd[1]: Reached target cryptsetup.target.
Jul 11 00:32:43.737604 systemd[1]: Reached target paths.target.
Jul 11 00:32:43.737611 systemd[1]: Reached target slices.target.
Jul 11 00:32:43.737617 systemd[1]: Reached target swap.target.
Jul 11 00:32:43.737625 systemd[1]: Reached target timers.target.
Jul 11 00:32:43.737644 systemd[1]: Listening on iscsid.socket.
Jul 11 00:32:43.737651 systemd[1]: Listening on iscsiuio.socket.
Jul 11 00:32:43.737660 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 11 00:32:43.737667 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 11 00:32:43.737674 systemd[1]: Listening on systemd-journald.socket.
Jul 11 00:32:43.737681 systemd[1]: Listening on systemd-networkd.socket.
Jul 11 00:32:43.737688 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 11 00:32:43.737695 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 11 00:32:43.737702 systemd[1]: Reached target sockets.target.
Jul 11 00:32:43.737709 systemd[1]: Starting kmod-static-nodes.service...
Jul 11 00:32:43.737716 systemd[1]: Finished network-cleanup.service.
Jul 11 00:32:43.737724 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 00:32:43.737731 systemd[1]: Starting systemd-journald.service...
Jul 11 00:32:43.737738 systemd[1]: Starting systemd-modules-load.service...
Jul 11 00:32:43.737745 systemd[1]: Starting systemd-resolved.service...
Jul 11 00:32:43.737752 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 11 00:32:43.737759 systemd[1]: Finished kmod-static-nodes.service.
Jul 11 00:32:43.737766 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 00:32:43.737773 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 11 00:32:43.737780 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 11 00:32:43.737789 kernel: audit: type=1130 audit(1752193963.730:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.737797 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 11 00:32:43.737804 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 11 00:32:43.737811 kernel: audit: type=1130 audit(1752193963.736:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.737821 systemd-journald[291]: Journal started
Jul 11 00:32:43.737880 systemd-journald[291]: Runtime Journal (/run/log/journal/f88c0be94e2148798ca7a4eca58e191e) is 6.0M, max 48.7M, 42.6M free.
Jul 11 00:32:43.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.724554 systemd-modules-load[292]: Inserted module 'overlay'
Jul 11 00:32:43.739498 systemd[1]: Started systemd-journald.service.
Jul 11 00:32:43.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.742859 kernel: audit: type=1130 audit(1752193963.740:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.745786 systemd-resolved[293]: Positive Trust Anchors:
Jul 11 00:32:43.745799 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:32:43.745828 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 11 00:32:43.758726 kernel: audit: type=1130 audit(1752193963.754:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.750980 systemd-resolved[293]: Defaulting to hostname 'linux'.
Jul 11 00:32:43.754619 systemd[1]: Started systemd-resolved.service.
Jul 11 00:32:43.762717 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 00:32:43.762734 kernel: audit: type=1130 audit(1752193963.761:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.755551 systemd[1]: Reached target nss-lookup.target.
Jul 11 00:32:43.766498 kernel: Bridge firewalling registered
Jul 11 00:32:43.759626 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 11 00:32:43.762975 systemd[1]: Starting dracut-cmdline.service...
Jul 11 00:32:43.764945 systemd-modules-load[292]: Inserted module 'br_netfilter'
Jul 11 00:32:43.773301 dracut-cmdline[309]: dracut-dracut-053
Jul 11 00:32:43.775505 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8fd3ef416118421b63f30b3d02e5d4feea39e34704e91050cdad11fae31df42c
Jul 11 00:32:43.780657 kernel: SCSI subsystem initialized
Jul 11 00:32:43.788142 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 00:32:43.788179 kernel: device-mapper: uevent: version 1.0.3
Jul 11 00:32:43.788188 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 11 00:32:43.790218 systemd-modules-load[292]: Inserted module 'dm_multipath'
Jul 11 00:32:43.790975 systemd[1]: Finished systemd-modules-load.service.
Jul 11 00:32:43.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.792412 systemd[1]: Starting systemd-sysctl.service...
Jul 11 00:32:43.795336 kernel: audit: type=1130 audit(1752193963.791:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.800744 systemd[1]: Finished systemd-sysctl.service.
Jul 11 00:32:43.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.803641 kernel: audit: type=1130 audit(1752193963.800:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.839656 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 00:32:43.854651 kernel: iscsi: registered transport (tcp)
Jul 11 00:32:43.871649 kernel: iscsi: registered transport (qla4xxx)
Jul 11 00:32:43.871667 kernel: QLogic iSCSI HBA Driver
Jul 11 00:32:43.903405 systemd[1]: Finished dracut-cmdline.service.
Jul 11 00:32:43.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.905010 systemd[1]: Starting dracut-pre-udev.service...
Jul 11 00:32:43.907668 kernel: audit: type=1130 audit(1752193963.903:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:43.951672 kernel: raid6: neonx8 gen() 13690 MB/s
Jul 11 00:32:43.968643 kernel: raid6: neonx8 xor() 10799 MB/s
Jul 11 00:32:43.985651 kernel: raid6: neonx4 gen() 13466 MB/s
Jul 11 00:32:44.002648 kernel: raid6: neonx4 xor() 11011 MB/s
Jul 11 00:32:44.019648 kernel: raid6: neonx2 gen() 12914 MB/s
Jul 11 00:32:44.036643 kernel: raid6: neonx2 xor() 10406 MB/s
Jul 11 00:32:44.053648 kernel: raid6: neonx1 gen() 10510 MB/s
Jul 11 00:32:44.070648 kernel: raid6: neonx1 xor() 8772 MB/s
Jul 11 00:32:44.087651 kernel: raid6: int64x8 gen() 6269 MB/s
Jul 11 00:32:44.104645 kernel: raid6: int64x8 xor() 3541 MB/s
Jul 11 00:32:44.121657 kernel: raid6: int64x4 gen() 7198 MB/s
Jul 11 00:32:44.138645 kernel: raid6: int64x4 xor() 3851 MB/s
Jul 11 00:32:44.155654 kernel: raid6: int64x2 gen() 6145 MB/s
Jul 11 00:32:44.172657 kernel: raid6: int64x2 xor() 3319 MB/s
Jul 11 00:32:44.189648 kernel: raid6: int64x1 gen() 5034 MB/s
Jul 11 00:32:44.206945 kernel: raid6: int64x1 xor() 2644 MB/s
Jul 11 00:32:44.206992 kernel: raid6: using algorithm neonx8 gen() 13690 MB/s
Jul 11 00:32:44.207002 kernel: raid6: .... xor() 10799 MB/s, rmw enabled
Jul 11 00:32:44.207010 kernel: raid6: using neon recovery algorithm
Jul 11 00:32:44.217715 kernel: xor: measuring software checksum speed
Jul 11 00:32:44.217744 kernel: 8regs : 16746 MB/sec
Jul 11 00:32:44.218763 kernel: 32regs : 20723 MB/sec
Jul 11 00:32:44.218777 kernel: arm64_neon : 27132 MB/sec
Jul 11 00:32:44.218785 kernel: xor: using function: arm64_neon (27132 MB/sec)
Jul 11 00:32:44.274667 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 11 00:32:44.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 11 00:32:44.284773 systemd[1]: Finished dracut-pre-udev.service.
Jul 11 00:32:44.288136 kernel: audit: type=1130 audit(1752193964.284:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:44.286000 audit: BPF prog-id=7 op=LOAD Jul 11 00:32:44.287000 audit: BPF prog-id=8 op=LOAD Jul 11 00:32:44.289184 systemd[1]: Starting systemd-udevd.service... Jul 11 00:32:44.307139 systemd-udevd[492]: Using default interface naming scheme 'v252'. Jul 11 00:32:44.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:44.310446 systemd[1]: Started systemd-udevd.service. Jul 11 00:32:44.317192 systemd[1]: Starting dracut-pre-trigger.service... Jul 11 00:32:44.327237 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation Jul 11 00:32:44.358971 systemd[1]: Finished dracut-pre-trigger.service. Jul 11 00:32:44.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:44.360481 systemd[1]: Starting systemd-udev-trigger.service... Jul 11 00:32:44.401689 systemd[1]: Finished systemd-udev-trigger.service. Jul 11 00:32:44.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:44.434322 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 11 00:32:44.439152 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 11 00:32:44.439168 kernel: GPT:9289727 != 19775487 Jul 11 00:32:44.439177 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jul 11 00:32:44.439186 kernel: GPT:9289727 != 19775487 Jul 11 00:32:44.439193 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 11 00:32:44.439201 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:32:44.457668 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (539) Jul 11 00:32:44.462003 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 11 00:32:44.462850 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 11 00:32:44.467133 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 11 00:32:44.472159 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 11 00:32:44.475563 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 11 00:32:44.477077 systemd[1]: Starting disk-uuid.service... Jul 11 00:32:44.483008 disk-uuid[564]: Primary Header is updated. Jul 11 00:32:44.483008 disk-uuid[564]: Secondary Entries is updated. Jul 11 00:32:44.483008 disk-uuid[564]: Secondary Header is updated. Jul 11 00:32:44.488153 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:32:45.497285 disk-uuid[565]: The operation has completed successfully. Jul 11 00:32:45.498223 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:32:45.514533 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 11 00:32:45.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:45.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:45.514623 systemd[1]: Finished disk-uuid.service. Jul 11 00:32:45.522661 systemd[1]: Starting verity-setup.service... 
Jul 11 00:32:45.541798 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 11 00:32:45.563850 systemd[1]: Found device dev-mapper-usr.device. Jul 11 00:32:45.565940 systemd[1]: Mounting sysusr-usr.mount... Jul 11 00:32:45.567871 systemd[1]: Finished verity-setup.service. Jul 11 00:32:45.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:45.613651 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 11 00:32:45.614070 systemd[1]: Mounted sysusr-usr.mount. Jul 11 00:32:45.614723 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 11 00:32:45.615356 systemd[1]: Starting ignition-setup.service... Jul 11 00:32:45.616977 systemd[1]: Starting parse-ip-for-networkd.service... Jul 11 00:32:45.623777 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 11 00:32:45.623807 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:32:45.623817 kernel: BTRFS info (device vda6): has skinny extents Jul 11 00:32:45.631433 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 11 00:32:45.636967 systemd[1]: Finished ignition-setup.service. Jul 11 00:32:45.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:45.638225 systemd[1]: Starting ignition-fetch-offline.service... Jul 11 00:32:45.698377 systemd[1]: Finished parse-ip-for-networkd.service. Jul 11 00:32:45.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:32:45.699000 audit: BPF prog-id=9 op=LOAD Jul 11 00:32:45.700531 systemd[1]: Starting systemd-networkd.service... Jul 11 00:32:45.731659 ignition[649]: Ignition 2.14.0 Jul 11 00:32:45.732480 ignition[649]: Stage: fetch-offline Jul 11 00:32:45.733222 ignition[649]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:32:45.734111 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:32:45.735299 ignition[649]: parsed url from cmdline: "" Jul 11 00:32:45.735355 ignition[649]: no config URL provided Jul 11 00:32:45.735793 systemd-networkd[741]: lo: Link UP Jul 11 00:32:45.735796 systemd-networkd[741]: lo: Gained carrier Jul 11 00:32:45.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:45.736384 systemd-networkd[741]: Enumeration completed Jul 11 00:32:45.736480 systemd[1]: Started systemd-networkd.service. Jul 11 00:32:45.738676 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" Jul 11 00:32:45.736756 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:32:45.738692 ignition[649]: no config at "/usr/lib/ignition/user.ign" Jul 11 00:32:45.737672 systemd[1]: Reached target network.target. Jul 11 00:32:45.738715 ignition[649]: op(1): [started] loading QEMU firmware config module Jul 11 00:32:45.738207 systemd-networkd[741]: eth0: Link UP Jul 11 00:32:45.738720 ignition[649]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 11 00:32:45.738210 systemd-networkd[741]: eth0: Gained carrier Jul 11 00:32:45.739665 systemd[1]: Starting iscsiuio.service... Jul 11 00:32:45.747660 ignition[649]: op(1): [finished] loading QEMU firmware config module Jul 11 00:32:45.747677 ignition[649]: QEMU firmware config was not found. Ignoring... Jul 11 00:32:45.750855 systemd[1]: Started iscsiuio.service. 
Jul 11 00:32:45.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:45.752780 systemd[1]: Starting iscsid.service... Jul 11 00:32:45.753715 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:32:45.755941 iscsid[748]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 11 00:32:45.755941 iscsid[748]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 11 00:32:45.755941 iscsid[748]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 11 00:32:45.755941 iscsid[748]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 11 00:32:45.755941 iscsid[748]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 11 00:32:45.755941 iscsid[748]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 11 00:32:45.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:45.758624 systemd[1]: Started iscsid.service. Jul 11 00:32:45.764269 systemd[1]: Starting dracut-initqueue.service... Jul 11 00:32:45.773989 systemd[1]: Finished dracut-initqueue.service. Jul 11 00:32:45.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 11 00:32:45.774771 systemd[1]: Reached target remote-fs-pre.target. Jul 11 00:32:45.776138 systemd[1]: Reached target remote-cryptsetup.target. Jul 11 00:32:45.777554 systemd[1]: Reached target remote-fs.target. Jul 11 00:32:45.779678 systemd[1]: Starting dracut-pre-mount.service... Jul 11 00:32:45.786823 systemd[1]: Finished dracut-pre-mount.service. Jul 11 00:32:45.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:45.806714 ignition[649]: parsing config with SHA512: 3143c1bf23cbfda5a267a0771d798eb08cb69750bc625ab526458a64523d705eaee6957c16636a048b3e07df46b93c820c57eed70e20b58c4a883d17b0775b7a Jul 11 00:32:45.814025 unknown[649]: fetched base config from "system" Jul 11 00:32:45.814039 unknown[649]: fetched user config from "qemu" Jul 11 00:32:45.814620 ignition[649]: fetch-offline: fetch-offline passed Jul 11 00:32:45.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:45.815559 systemd[1]: Finished ignition-fetch-offline.service. Jul 11 00:32:45.814709 ignition[649]: Ignition finished successfully Jul 11 00:32:45.816815 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 11 00:32:45.817485 systemd[1]: Starting ignition-kargs.service... Jul 11 00:32:45.826166 ignition[762]: Ignition 2.14.0 Jul 11 00:32:45.826175 ignition[762]: Stage: kargs Jul 11 00:32:45.826273 ignition[762]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:32:45.828148 systemd[1]: Finished ignition-kargs.service. 
Jul 11 00:32:45.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:45.826282 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:32:45.827131 ignition[762]: kargs: kargs passed Jul 11 00:32:45.829922 systemd[1]: Starting ignition-disks.service... Jul 11 00:32:45.827171 ignition[762]: Ignition finished successfully Jul 11 00:32:45.835891 ignition[768]: Ignition 2.14.0 Jul 11 00:32:45.835900 ignition[768]: Stage: disks Jul 11 00:32:45.835981 ignition[768]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:32:45.835991 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:32:45.836911 ignition[768]: disks: disks passed Jul 11 00:32:45.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:45.837773 systemd[1]: Finished ignition-disks.service. Jul 11 00:32:45.836951 ignition[768]: Ignition finished successfully Jul 11 00:32:45.838933 systemd[1]: Reached target initrd-root-device.target. Jul 11 00:32:45.839846 systemd[1]: Reached target local-fs-pre.target. Jul 11 00:32:45.840810 systemd[1]: Reached target local-fs.target. Jul 11 00:32:45.841739 systemd[1]: Reached target sysinit.target. Jul 11 00:32:45.842711 systemd[1]: Reached target basic.target. Jul 11 00:32:45.844444 systemd[1]: Starting systemd-fsck-root.service... Jul 11 00:32:45.854796 systemd-fsck[776]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 11 00:32:45.858190 systemd[1]: Finished systemd-fsck-root.service. Jul 11 00:32:45.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:32:45.859545 systemd[1]: Mounting sysroot.mount... Jul 11 00:32:45.864439 systemd[1]: Mounted sysroot.mount. Jul 11 00:32:45.865550 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 11 00:32:45.865068 systemd[1]: Reached target initrd-root-fs.target. Jul 11 00:32:45.866999 systemd[1]: Mounting sysroot-usr.mount... Jul 11 00:32:45.867711 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 11 00:32:45.867745 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 11 00:32:45.867768 systemd[1]: Reached target ignition-diskful.target. Jul 11 00:32:45.869377 systemd[1]: Mounted sysroot-usr.mount. Jul 11 00:32:45.870914 systemd[1]: Starting initrd-setup-root.service... Jul 11 00:32:45.874976 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory Jul 11 00:32:45.878166 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory Jul 11 00:32:45.881413 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory Jul 11 00:32:45.885139 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory Jul 11 00:32:45.911661 systemd[1]: Finished initrd-setup-root.service. Jul 11 00:32:45.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:45.913009 systemd[1]: Starting ignition-mount.service... Jul 11 00:32:45.914111 systemd[1]: Starting sysroot-boot.service... Jul 11 00:32:45.918064 bash[827]: umount: /sysroot/usr/share/oem: not mounted. 
Jul 11 00:32:45.926191 ignition[829]: INFO : Ignition 2.14.0 Jul 11 00:32:45.926191 ignition[829]: INFO : Stage: mount Jul 11 00:32:45.927574 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:32:45.927574 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:32:45.927574 ignition[829]: INFO : mount: mount passed Jul 11 00:32:45.927574 ignition[829]: INFO : Ignition finished successfully Jul 11 00:32:45.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:45.928181 systemd[1]: Finished ignition-mount.service. Jul 11 00:32:45.931941 systemd[1]: Finished sysroot-boot.service. Jul 11 00:32:45.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:46.574439 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 11 00:32:46.580962 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (838) Jul 11 00:32:46.580993 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 11 00:32:46.581002 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:32:46.581888 kernel: BTRFS info (device vda6): has skinny extents Jul 11 00:32:46.584615 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 11 00:32:46.585954 systemd[1]: Starting ignition-files.service... 
Jul 11 00:32:46.599338 ignition[858]: INFO : Ignition 2.14.0 Jul 11 00:32:46.599338 ignition[858]: INFO : Stage: files Jul 11 00:32:46.600573 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:32:46.600573 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:32:46.600573 ignition[858]: DEBUG : files: compiled without relabeling support, skipping Jul 11 00:32:46.604644 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 11 00:32:46.604644 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 11 00:32:46.606773 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 11 00:32:46.607707 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 11 00:32:46.607707 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 11 00:32:46.607264 unknown[858]: wrote ssh authorized keys file for user: core Jul 11 00:32:46.610754 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 11 00:32:46.610754 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 11 00:32:46.610754 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 11 00:32:46.610754 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 11 00:32:46.654317 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 11 00:32:46.777030 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 11 
00:32:46.778620 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 11 00:32:46.778620 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 11 00:32:47.150224 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jul 11 00:32:47.399111 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 11 00:32:47.399111 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jul 11 00:32:47.401720 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jul 11 00:32:47.401720 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:32:47.401720 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:32:47.401720 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:32:47.401720 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:32:47.401720 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:32:47.401720 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:32:47.401720 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 
00:32:47.401720 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:32:47.401720 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 11 00:32:47.401720 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 11 00:32:47.401720 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 11 00:32:47.401720 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 11 00:32:47.654822 systemd-networkd[741]: eth0: Gained IPv6LL Jul 11 00:32:47.763605 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jul 11 00:32:48.280080 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 11 00:32:48.280080 ignition[858]: INFO : files: op(d): [started] processing unit "containerd.service" Jul 11 00:32:48.282777 ignition[858]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 11 00:32:48.282777 ignition[858]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 11 00:32:48.282777 ignition[858]: INFO : files: op(d): [finished] processing unit "containerd.service" Jul 11 00:32:48.282777 ignition[858]: INFO : files: op(f): 
[started] processing unit "prepare-helm.service" Jul 11 00:32:48.282777 ignition[858]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:32:48.282777 ignition[858]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:32:48.282777 ignition[858]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jul 11 00:32:48.282777 ignition[858]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jul 11 00:32:48.282777 ignition[858]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:32:48.282777 ignition[858]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:32:48.282777 ignition[858]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jul 11 00:32:48.282777 ignition[858]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Jul 11 00:32:48.282777 ignition[858]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Jul 11 00:32:48.282777 ignition[858]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 00:32:48.282777 ignition[858]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:32:48.326197 ignition[858]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:32:48.328271 ignition[858]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 00:32:48.328271 ignition[858]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" 
Jul 11 00:32:48.328271 ignition[858]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:32:48.328271 ignition[858]: INFO : files: files passed Jul 11 00:32:48.328271 ignition[858]: INFO : Ignition finished successfully Jul 11 00:32:48.336720 kernel: kauditd_printk_skb: 23 callbacks suppressed Jul 11 00:32:48.336742 kernel: audit: type=1130 audit(1752193968.329:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.328289 systemd[1]: Finished ignition-files.service. Jul 11 00:32:48.330823 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 11 00:32:48.335351 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 11 00:32:48.341181 initrd-setup-root-after-ignition[884]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 11 00:32:48.335993 systemd[1]: Starting ignition-quench.service... Jul 11 00:32:48.347041 kernel: audit: type=1130 audit(1752193968.341:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.347062 kernel: audit: type=1131 audit(1752193968.341:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:32:48.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.347155 initrd-setup-root-after-ignition[886]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:32:48.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.339501 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 11 00:32:48.351675 kernel: audit: type=1130 audit(1752193968.346:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.339582 systemd[1]: Finished ignition-quench.service. Jul 11 00:32:48.343309 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 11 00:32:48.347706 systemd[1]: Reached target ignition-complete.target. Jul 11 00:32:48.351694 systemd[1]: Starting initrd-parse-etc.service... Jul 11 00:32:48.363667 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 11 00:32:48.363753 systemd[1]: Finished initrd-parse-etc.service. Jul 11 00:32:48.368829 kernel: audit: type=1130 audit(1752193968.364:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:32:48.368848 kernel: audit: type=1131 audit(1752193968.364:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.364967 systemd[1]: Reached target initrd-fs.target. Jul 11 00:32:48.369356 systemd[1]: Reached target initrd.target. Jul 11 00:32:48.370343 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 11 00:32:48.371038 systemd[1]: Starting dracut-pre-pivot.service... Jul 11 00:32:48.380958 systemd[1]: Finished dracut-pre-pivot.service. Jul 11 00:32:48.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.382228 systemd[1]: Starting initrd-cleanup.service... Jul 11 00:32:48.384690 kernel: audit: type=1130 audit(1752193968.380:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.390055 systemd[1]: Stopped target nss-lookup.target. Jul 11 00:32:48.390734 systemd[1]: Stopped target remote-cryptsetup.target. Jul 11 00:32:48.391783 systemd[1]: Stopped target timers.target. Jul 11 00:32:48.392682 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jul 11 00:32:48.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.392795 systemd[1]: Stopped dracut-pre-pivot.service. Jul 11 00:32:48.396687 kernel: audit: type=1131 audit(1752193968.392:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.393717 systemd[1]: Stopped target initrd.target. Jul 11 00:32:48.396324 systemd[1]: Stopped target basic.target. Jul 11 00:32:48.397206 systemd[1]: Stopped target ignition-complete.target. Jul 11 00:32:48.398162 systemd[1]: Stopped target ignition-diskful.target. Jul 11 00:32:48.399210 systemd[1]: Stopped target initrd-root-device.target. Jul 11 00:32:48.400273 systemd[1]: Stopped target remote-fs.target. Jul 11 00:32:48.401232 systemd[1]: Stopped target remote-fs-pre.target. Jul 11 00:32:48.402252 systemd[1]: Stopped target sysinit.target. Jul 11 00:32:48.403188 systemd[1]: Stopped target local-fs.target. Jul 11 00:32:48.404127 systemd[1]: Stopped target local-fs-pre.target. Jul 11 00:32:48.405051 systemd[1]: Stopped target swap.target. Jul 11 00:32:48.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.405902 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 11 00:32:48.410319 kernel: audit: type=1131 audit(1752193968.406:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.406015 systemd[1]: Stopped dracut-pre-mount.service. 
Jul 11 00:32:48.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.407168 systemd[1]: Stopped target cryptsetup.target. Jul 11 00:32:48.414275 kernel: audit: type=1131 audit(1752193968.410:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.409815 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:32:48.409917 systemd[1]: Stopped dracut-initqueue.service. Jul 11 00:32:48.411135 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:32:48.411236 systemd[1]: Stopped ignition-fetch-offline.service. Jul 11 00:32:48.413969 systemd[1]: Stopped target paths.target. Jul 11 00:32:48.414946 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:32:48.416704 systemd[1]: Stopped systemd-ask-password-console.path. Jul 11 00:32:48.418485 systemd[1]: Stopped target slices.target. Jul 11 00:32:48.419527 systemd[1]: Stopped target sockets.target. Jul 11 00:32:48.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.420814 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:32:48.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:32:48.420888 systemd[1]: Closed iscsid.socket. Jul 11 00:32:48.422003 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 00:32:48.422105 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 11 00:32:48.423250 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:32:48.423340 systemd[1]: Stopped ignition-files.service. Jul 11 00:32:48.425188 systemd[1]: Stopping ignition-mount.service... Jul 11 00:32:48.427266 systemd[1]: Stopping iscsiuio.service... Jul 11 00:32:48.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.433815 ignition[899]: INFO : Ignition 2.14.0 Jul 11 00:32:48.433815 ignition[899]: INFO : Stage: umount Jul 11 00:32:48.433815 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:32:48.433815 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:32:48.433815 ignition[899]: INFO : umount: umount passed Jul 11 00:32:48.433815 ignition[899]: INFO : Ignition finished successfully Jul 11 00:32:48.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.427897 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jul 11 00:32:48.428016 systemd[1]: Stopped kmod-static-nodes.service. Jul 11 00:32:48.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.430242 systemd[1]: Stopping sysroot-boot.service... Jul 11 00:32:48.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.431249 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:32:48.431365 systemd[1]: Stopped systemd-udev-trigger.service. Jul 11 00:32:48.432691 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:32:48.432789 systemd[1]: Stopped dracut-pre-trigger.service. Jul 11 00:32:48.435020 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 11 00:32:48.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.435106 systemd[1]: Stopped iscsiuio.service. Jul 11 00:32:48.437290 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jul 11 00:32:48.437352 systemd[1]: Closed iscsiuio.socket. Jul 11 00:32:48.438592 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:32:48.438693 systemd[1]: Finished initrd-cleanup.service. Jul 11 00:32:48.439947 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:32:48.440025 systemd[1]: Stopped ignition-mount.service. Jul 11 00:32:48.441586 systemd[1]: Stopped target network.target. Jul 11 00:32:48.443698 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 00:32:48.443753 systemd[1]: Stopped ignition-disks.service. Jul 11 00:32:48.445037 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:32:48.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.445078 systemd[1]: Stopped ignition-kargs.service. Jul 11 00:32:48.446340 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:32:48.446374 systemd[1]: Stopped ignition-setup.service. Jul 11 00:32:48.447905 systemd[1]: Stopping systemd-networkd.service... Jul 11 00:32:48.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.448815 systemd[1]: Stopping systemd-resolved.service... Jul 11 00:32:48.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.450532 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:32:48.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:32:48.453733 systemd-networkd[741]: eth0: DHCPv6 lease lost Jul 11 00:32:48.466000 audit: BPF prog-id=9 op=UNLOAD Jul 11 00:32:48.455298 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:32:48.455400 systemd[1]: Stopped systemd-networkd.service. Jul 11 00:32:48.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.456756 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:32:48.456784 systemd[1]: Closed systemd-networkd.socket. Jul 11 00:32:48.458432 systemd[1]: Stopping network-cleanup.service... Jul 11 00:32:48.459392 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:32:48.459445 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 11 00:32:48.460486 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:32:48.460524 systemd[1]: Stopped systemd-sysctl.service. Jul 11 00:32:48.473000 audit: BPF prog-id=6 op=UNLOAD Jul 11 00:32:48.462019 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:32:48.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.462060 systemd[1]: Stopped systemd-modules-load.service. Jul 11 00:32:48.463286 systemd[1]: Stopping systemd-udevd.service... Jul 11 00:32:48.468781 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 11 00:32:48.469251 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 00:32:48.469341 systemd[1]: Stopped systemd-resolved.service. Jul 11 00:32:48.474273 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 00:32:48.474370 systemd[1]: Stopped network-cleanup.service. 
Jul 11 00:32:48.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.476955 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 11 00:32:48.480330 systemd[1]: Stopped systemd-udevd.service. Jul 11 00:32:48.481671 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:32:48.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.481707 systemd[1]: Closed systemd-udevd-control.socket. Jul 11 00:32:48.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.482506 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 00:32:48.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.482534 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 11 00:32:48.483540 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:32:48.483582 systemd[1]: Stopped dracut-pre-udev.service. Jul 11 00:32:48.485462 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:32:48.485502 systemd[1]: Stopped dracut-cmdline.service. Jul 11 00:32:48.486521 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:32:48.486556 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 11 00:32:48.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 11 00:32:48.488211 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 11 00:32:48.489170 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:32:48.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.489216 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 11 00:32:48.494668 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:32:48.494752 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 11 00:32:48.504289 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 11 00:32:48.504382 systemd[1]: Stopped sysroot-boot.service. Jul 11 00:32:48.505523 systemd[1]: Reached target initrd-switch-root.target. Jul 11 00:32:48.506383 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:32:48.506426 systemd[1]: Stopped initrd-setup-root.service. Jul 11 00:32:48.508101 systemd[1]: Starting initrd-switch-root.service... Jul 11 00:32:48.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:48.513557 systemd[1]: Switching root. 
Jul 11 00:32:48.514000 audit: BPF prog-id=8 op=UNLOAD Jul 11 00:32:48.514000 audit: BPF prog-id=7 op=UNLOAD Jul 11 00:32:48.515000 audit: BPF prog-id=5 op=UNLOAD Jul 11 00:32:48.515000 audit: BPF prog-id=4 op=UNLOAD Jul 11 00:32:48.515000 audit: BPF prog-id=3 op=UNLOAD Jul 11 00:32:48.532809 iscsid[748]: iscsid shutting down. Jul 11 00:32:48.533309 systemd-journald[291]: Journal stopped Jul 11 00:32:50.705342 systemd-journald[291]: Received SIGTERM from PID 1 (systemd). Jul 11 00:32:50.705414 kernel: SELinux: Class mctp_socket not defined in policy. Jul 11 00:32:50.705431 kernel: SELinux: Class anon_inode not defined in policy. Jul 11 00:32:50.705441 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 11 00:32:50.705452 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 00:32:50.705461 kernel: SELinux: policy capability open_perms=1 Jul 11 00:32:50.705471 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 00:32:50.705480 kernel: SELinux: policy capability always_check_network=0 Jul 11 00:32:50.705489 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 00:32:50.705502 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 00:32:50.705513 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 00:32:50.705526 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 00:32:50.705536 systemd[1]: Successfully loaded SELinux policy in 36.853ms. Jul 11 00:32:50.705552 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.891ms. Jul 11 00:32:50.705565 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 11 00:32:50.705577 systemd[1]: Detected virtualization kvm. 
Jul 11 00:32:50.705588 systemd[1]: Detected architecture arm64. Jul 11 00:32:50.705599 systemd[1]: Detected first boot. Jul 11 00:32:50.705609 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:32:50.705620 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 11 00:32:50.707382 systemd[1]: Populated /etc with preset unit settings. Jul 11 00:32:50.707421 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 11 00:32:50.707438 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 11 00:32:50.707450 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:32:50.707462 systemd[1]: Queued start job for default target multi-user.target. Jul 11 00:32:50.707472 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 11 00:32:50.707483 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 11 00:32:50.707494 systemd[1]: Created slice system-addon\x2drun.slice. Jul 11 00:32:50.707504 systemd[1]: Created slice system-getty.slice. Jul 11 00:32:50.707516 systemd[1]: Created slice system-modprobe.slice. Jul 11 00:32:50.707527 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 11 00:32:50.707539 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 11 00:32:50.707550 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 11 00:32:50.707560 systemd[1]: Created slice user.slice. Jul 11 00:32:50.707570 systemd[1]: Started systemd-ask-password-console.path. Jul 11 00:32:50.707580 systemd[1]: Started systemd-ask-password-wall.path. Jul 11 00:32:50.707590 systemd[1]: Set up automount boot.automount. 
Jul 11 00:32:50.707600 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 11 00:32:50.707612 systemd[1]: Reached target integritysetup.target. Jul 11 00:32:50.707622 systemd[1]: Reached target remote-cryptsetup.target. Jul 11 00:32:50.707647 systemd[1]: Reached target remote-fs.target. Jul 11 00:32:50.707658 systemd[1]: Reached target slices.target. Jul 11 00:32:50.707669 systemd[1]: Reached target swap.target. Jul 11 00:32:50.707679 systemd[1]: Reached target torcx.target. Jul 11 00:32:50.707689 systemd[1]: Reached target veritysetup.target. Jul 11 00:32:50.707699 systemd[1]: Listening on systemd-coredump.socket. Jul 11 00:32:50.707709 systemd[1]: Listening on systemd-initctl.socket. Jul 11 00:32:50.707723 systemd[1]: Listening on systemd-journald-audit.socket. Jul 11 00:32:50.707733 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 11 00:32:50.707743 systemd[1]: Listening on systemd-journald.socket. Jul 11 00:32:50.707755 systemd[1]: Listening on systemd-networkd.socket. Jul 11 00:32:50.707766 systemd[1]: Listening on systemd-udevd-control.socket. Jul 11 00:32:50.707776 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 11 00:32:50.707787 systemd[1]: Listening on systemd-userdbd.socket. Jul 11 00:32:50.707796 systemd[1]: Mounting dev-hugepages.mount... Jul 11 00:32:50.707806 systemd[1]: Mounting dev-mqueue.mount... Jul 11 00:32:50.707822 systemd[1]: Mounting media.mount... Jul 11 00:32:50.707832 systemd[1]: Mounting sys-kernel-debug.mount... Jul 11 00:32:50.707842 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 11 00:32:50.707851 systemd[1]: Mounting tmp.mount... Jul 11 00:32:50.707862 systemd[1]: Starting flatcar-tmpfiles.service... Jul 11 00:32:50.707873 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:32:50.707883 systemd[1]: Starting kmod-static-nodes.service... Jul 11 00:32:50.707893 systemd[1]: Starting modprobe@configfs.service... 
Jul 11 00:32:50.707903 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:32:50.707915 systemd[1]: Starting modprobe@drm.service... Jul 11 00:32:50.707925 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:32:50.707935 systemd[1]: Starting modprobe@fuse.service... Jul 11 00:32:50.707945 systemd[1]: Starting modprobe@loop.service... Jul 11 00:32:50.707956 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 00:32:50.707966 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 11 00:32:50.707978 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 11 00:32:50.707988 systemd[1]: Starting systemd-journald.service... Jul 11 00:32:50.707998 systemd[1]: Starting systemd-modules-load.service... Jul 11 00:32:50.708009 systemd[1]: Starting systemd-network-generator.service... Jul 11 00:32:50.708019 systemd[1]: Starting systemd-remount-fs.service... Jul 11 00:32:50.708029 kernel: fuse: init (API version 7.34) Jul 11 00:32:50.708040 systemd[1]: Starting systemd-udev-trigger.service... Jul 11 00:32:50.708052 systemd[1]: Mounted dev-hugepages.mount. Jul 11 00:32:50.708062 systemd[1]: Mounted dev-mqueue.mount. Jul 11 00:32:50.708072 kernel: loop: module loaded Jul 11 00:32:50.708081 systemd[1]: Mounted media.mount. Jul 11 00:32:50.708091 systemd[1]: Mounted sys-kernel-debug.mount. Jul 11 00:32:50.708101 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 11 00:32:50.708111 systemd[1]: Mounted tmp.mount. Jul 11 00:32:50.708125 systemd-journald[1026]: Journal started Jul 11 00:32:50.708177 systemd-journald[1026]: Runtime Journal (/run/log/journal/f88c0be94e2148798ca7a4eca58e191e) is 6.0M, max 48.7M, 42.6M free. 
Jul 11 00:32:50.626000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 11 00:32:50.626000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 11 00:32:50.701000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 11 00:32:50.701000 audit[1026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffffc60a50 a2=4000 a3=1 items=0 ppid=1 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:32:50.701000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 11 00:32:50.710774 systemd[1]: Finished kmod-static-nodes.service. Jul 11 00:32:50.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.711663 systemd[1]: Started systemd-journald.service. Jul 11 00:32:50.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.713013 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 00:32:50.713245 systemd[1]: Finished modprobe@configfs.service. Jul 11 00:32:50.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 11 00:32:50.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.714253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:32:50.714456 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:32:50.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.715317 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:32:50.715519 systemd[1]: Finished modprobe@drm.service. Jul 11 00:32:50.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.716421 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:32:50.716614 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:32:50.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:32:50.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.717455 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 00:32:50.717665 systemd[1]: Finished modprobe@fuse.service. Jul 11 00:32:50.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.718463 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:32:50.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.722387 systemd[1]: Finished modprobe@loop.service. Jul 11 00:32:50.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.723479 systemd[1]: Finished systemd-modules-load.service. 
Jul 11 00:32:50.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.724538 systemd[1]: Finished systemd-network-generator.service. Jul 11 00:32:50.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.725643 systemd[1]: Finished systemd-remount-fs.service. Jul 11 00:32:50.726602 systemd[1]: Reached target network-pre.target. Jul 11 00:32:50.728506 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 11 00:32:50.730574 systemd[1]: Mounting sys-kernel-config.mount... Jul 11 00:32:50.731172 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 11 00:32:50.732834 systemd[1]: Starting systemd-hwdb-update.service... Jul 11 00:32:50.734523 systemd[1]: Starting systemd-journal-flush.service... Jul 11 00:32:50.735412 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:32:50.736444 systemd[1]: Starting systemd-random-seed.service... Jul 11 00:32:50.737259 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:32:50.738286 systemd[1]: Starting systemd-sysctl.service... Jul 11 00:32:50.743245 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 11 00:32:50.747534 systemd-journald[1026]: Time spent on flushing to /var/log/journal/f88c0be94e2148798ca7a4eca58e191e is 12.522ms for 931 entries. Jul 11 00:32:50.747534 systemd-journald[1026]: System Journal (/var/log/journal/f88c0be94e2148798ca7a4eca58e191e) is 8.0M, max 195.6M, 187.6M free. 
Jul 11 00:32:50.766220 systemd-journald[1026]: Received client request to flush runtime journal. Jul 11 00:32:50.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.744153 systemd[1]: Mounted sys-kernel-config.mount. Jul 11 00:32:50.746908 systemd[1]: Finished flatcar-tmpfiles.service. Jul 11 00:32:50.748615 systemd[1]: Starting systemd-sysusers.service... Jul 11 00:32:50.758359 systemd[1]: Finished systemd-random-seed.service. Jul 11 00:32:50.759199 systemd[1]: Reached target first-boot-complete.target. Jul 11 00:32:50.760108 systemd[1]: Finished systemd-udev-trigger.service. Jul 11 00:32:50.761968 systemd[1]: Starting systemd-udev-settle.service... Jul 11 00:32:50.768474 systemd[1]: Finished systemd-journal-flush.service. Jul 11 00:32:50.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.771154 udevadm[1082]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 11 00:32:50.775974 systemd[1]: Finished systemd-sysusers.service. 
Jul 11 00:32:50.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.776897 systemd[1]: Finished systemd-sysctl.service. Jul 11 00:32:50.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:50.778574 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 11 00:32:50.795806 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 11 00:32:50.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.113257 systemd[1]: Finished systemd-hwdb-update.service. Jul 11 00:32:51.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.115160 systemd[1]: Starting systemd-udevd.service... Jul 11 00:32:51.133455 systemd-udevd[1092]: Using default interface naming scheme 'v252'. Jul 11 00:32:51.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.157600 systemd[1]: Started systemd-udevd.service. Jul 11 00:32:51.159773 systemd[1]: Starting systemd-networkd.service... Jul 11 00:32:51.177953 systemd[1]: Found device dev-ttyAMA0.device. Jul 11 00:32:51.188103 systemd[1]: Starting systemd-userdbd.service... 
Jul 11 00:32:51.236568 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 11 00:32:51.241069 systemd[1]: Started systemd-userdbd.service. Jul 11 00:32:51.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.249030 systemd[1]: Finished systemd-udev-settle.service. Jul 11 00:32:51.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.251172 systemd[1]: Starting lvm2-activation-early.service... Jul 11 00:32:51.278613 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:32:51.297458 systemd-networkd[1094]: lo: Link UP Jul 11 00:32:51.297747 systemd-networkd[1094]: lo: Gained carrier Jul 11 00:32:51.298170 systemd-networkd[1094]: Enumeration completed Jul 11 00:32:51.298366 systemd-networkd[1094]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:32:51.298387 systemd[1]: Started systemd-networkd.service. Jul 11 00:32:51.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.299807 systemd-networkd[1094]: eth0: Link UP Jul 11 00:32:51.299893 systemd-networkd[1094]: eth0: Gained carrier Jul 11 00:32:51.306558 systemd[1]: Finished lvm2-activation-early.service. Jul 11 00:32:51.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:32:51.307568 systemd[1]: Reached target cryptsetup.target. Jul 11 00:32:51.309658 systemd[1]: Starting lvm2-activation.service... Jul 11 00:32:51.313243 lvm[1128]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:32:51.319792 systemd-networkd[1094]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:32:51.337744 systemd[1]: Finished lvm2-activation.service. Jul 11 00:32:51.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.338493 systemd[1]: Reached target local-fs-pre.target. Jul 11 00:32:51.339158 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 11 00:32:51.339187 systemd[1]: Reached target local-fs.target. Jul 11 00:32:51.339767 systemd[1]: Reached target machines.target. Jul 11 00:32:51.341628 systemd[1]: Starting ldconfig.service... Jul 11 00:32:51.342664 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:32:51.342730 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:32:51.343934 systemd[1]: Starting systemd-boot-update.service... Jul 11 00:32:51.345539 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 11 00:32:51.347570 systemd[1]: Starting systemd-machine-id-commit.service... Jul 11 00:32:51.349417 systemd[1]: Starting systemd-sysext.service... Jul 11 00:32:51.350537 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1131 (bootctl) Jul 11 00:32:51.351614 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Jul 11 00:32:51.366510 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 11 00:32:51.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.372595 systemd[1]: Unmounting usr-share-oem.mount... Jul 11 00:32:51.377511 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 11 00:32:51.377784 systemd[1]: Unmounted usr-share-oem.mount. Jul 11 00:32:51.418270 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 00:32:51.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.419263 systemd[1]: Finished systemd-machine-id-commit.service. Jul 11 00:32:51.421659 kernel: loop0: detected capacity change from 0 to 203944 Jul 11 00:32:51.433209 systemd-fsck[1143]: fsck.fat 4.2 (2021-01-31) Jul 11 00:32:51.433209 systemd-fsck[1143]: /dev/vda1: 236 files, 117310/258078 clusters Jul 11 00:32:51.436021 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 11 00:32:51.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.439170 systemd[1]: Mounting boot.mount... Jul 11 00:32:51.442836 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 00:32:51.446684 systemd[1]: Mounted boot.mount. Jul 11 00:32:51.454555 systemd[1]: Finished systemd-boot-update.service. 
Jul 11 00:32:51.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.462685 kernel: loop1: detected capacity change from 0 to 203944 Jul 11 00:32:51.467697 (sd-sysext)[1152]: Using extensions 'kubernetes'. Jul 11 00:32:51.468007 (sd-sysext)[1152]: Merged extensions into '/usr'. Jul 11 00:32:51.484831 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:32:51.486367 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:32:51.488760 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:32:51.490757 systemd[1]: Starting modprobe@loop.service... Jul 11 00:32:51.491954 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:32:51.492146 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:32:51.492944 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:32:51.493090 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:32:51.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.494284 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:32:51.494422 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 11 00:32:51.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.495739 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:32:51.495934 systemd[1]: Finished modprobe@loop.service. Jul 11 00:32:51.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.497183 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:32:51.497292 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:32:51.555018 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 00:32:51.558824 systemd[1]: Finished ldconfig.service. Jul 11 00:32:51.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.698128 systemd[1]: Mounting usr-share-oem.mount... Jul 11 00:32:51.703240 systemd[1]: Mounted usr-share-oem.mount. 
Jul 11 00:32:51.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.704910 systemd[1]: Finished systemd-sysext.service. Jul 11 00:32:51.706801 systemd[1]: Starting ensure-sysext.service... Jul 11 00:32:51.708414 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 11 00:32:51.713055 systemd[1]: Reloading. Jul 11 00:32:51.717679 systemd-tmpfiles[1167]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 11 00:32:51.718460 systemd-tmpfiles[1167]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 11 00:32:51.719769 systemd-tmpfiles[1167]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 11 00:32:51.743479 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2025-07-11T00:32:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 11 00:32:51.743510 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2025-07-11T00:32:51Z" level=info msg="torcx already run" Jul 11 00:32:51.806846 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 11 00:32:51.806864 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 11 00:32:51.822370 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:32:51.867846 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 11 00:32:51.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.871578 systemd[1]: Starting audit-rules.service... Jul 11 00:32:51.873261 systemd[1]: Starting clean-ca-certificates.service... Jul 11 00:32:51.875070 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 11 00:32:51.877413 systemd[1]: Starting systemd-resolved.service... Jul 11 00:32:51.879559 systemd[1]: Starting systemd-timesyncd.service... Jul 11 00:32:51.881860 systemd[1]: Starting systemd-update-utmp.service... Jul 11 00:32:51.883170 systemd[1]: Finished clean-ca-certificates.service. Jul 11 00:32:51.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.886441 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:32:51.886000 audit[1239]: SYSTEM_BOOT pid=1239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.890574 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:32:51.892113 systemd[1]: Starting modprobe@dm_mod.service... 
Jul 11 00:32:51.893992 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:32:51.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.896028 systemd[1]: Starting modprobe@loop.service... Jul 11 00:32:51.896624 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:32:51.896789 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:32:51.896923 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:32:51.898006 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:32:51.898143 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:32:51.899302 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:32:51.899429 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:32:51.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:32:51.903965 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 11 00:32:51.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.905293 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:32:51.905448 systemd[1]: Finished modprobe@loop.service. Jul 11 00:32:51.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.906574 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:32:51.906717 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:32:51.907934 systemd[1]: Starting systemd-update-done.service... Jul 11 00:32:51.909276 systemd[1]: Finished systemd-update-utmp.service. Jul 11 00:32:51.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.912289 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:32:51.913459 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:32:51.915176 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:32:51.916909 systemd[1]: Starting modprobe@loop.service... 
Jul 11 00:32:51.917523 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:32:51.917682 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:32:51.917781 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:32:51.918605 systemd[1]: Finished systemd-update-done.service. Jul 11 00:32:51.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.919840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:32:51.919966 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:32:51.921095 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:32:51.921235 systemd[1]: Finished modprobe@loop.service. Jul 11 00:32:51.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:32:51.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.922267 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:32:51.924623 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:32:51.925988 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:32:51.927846 systemd[1]: Starting modprobe@drm.service... Jul 11 00:32:51.929749 systemd[1]: Starting modprobe@loop.service... Jul 11 00:32:51.930369 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:32:51.930480 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:32:51.931873 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 11 00:32:51.932725 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:32:51.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 11 00:32:51.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.933848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:32:51.934012 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:32:51.935225 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:32:51.935367 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:32:51.936368 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:32:51.941849 systemd[1]: Finished modprobe@drm.service. Jul 11 00:32:51.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.943056 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:32:51.943242 systemd[1]: Finished modprobe@loop.service. Jul 11 00:32:51.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.944393 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 11 00:32:51.944485 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:32:51.945532 systemd[1]: Finished ensure-sysext.service. Jul 11 00:32:51.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:32:51.962000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 11 00:32:51.962000 audit[1279]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff9beac70 a2=420 a3=0 items=0 ppid=1232 pid=1279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:32:51.962000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 11 00:32:51.963148 augenrules[1279]: No rules Jul 11 00:32:51.964169 systemd[1]: Finished audit-rules.service. Jul 11 00:32:51.969263 systemd-resolved[1237]: Positive Trust Anchors: Jul 11 00:32:51.969272 systemd-resolved[1237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:32:51.969298 systemd-resolved[1237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 11 00:32:51.969893 systemd[1]: Started systemd-timesyncd.service. 
Jul 11 00:32:51.970784 systemd-timesyncd[1238]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 11 00:32:51.970831 systemd-timesyncd[1238]: Initial clock synchronization to Fri 2025-07-11 00:32:51.702195 UTC. Jul 11 00:32:51.970901 systemd[1]: Reached target time-set.target. Jul 11 00:32:51.980621 systemd-resolved[1237]: Defaulting to hostname 'linux'. Jul 11 00:32:51.982036 systemd[1]: Started systemd-resolved.service. Jul 11 00:32:51.982689 systemd[1]: Reached target network.target. Jul 11 00:32:51.983221 systemd[1]: Reached target nss-lookup.target. Jul 11 00:32:51.983803 systemd[1]: Reached target sysinit.target. Jul 11 00:32:51.984397 systemd[1]: Started motdgen.path. Jul 11 00:32:51.984927 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 11 00:32:51.985815 systemd[1]: Started logrotate.timer. Jul 11 00:32:51.986406 systemd[1]: Started mdadm.timer. Jul 11 00:32:51.986905 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 11 00:32:51.987481 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 00:32:51.987508 systemd[1]: Reached target paths.target. Jul 11 00:32:51.988103 systemd[1]: Reached target timers.target. Jul 11 00:32:51.988935 systemd[1]: Listening on dbus.socket. Jul 11 00:32:51.990594 systemd[1]: Starting docker.socket... Jul 11 00:32:51.994874 systemd[1]: Listening on sshd.socket. Jul 11 00:32:51.995519 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:32:51.995916 systemd[1]: Listening on docker.socket. Jul 11 00:32:51.996522 systemd[1]: Reached target sockets.target. Jul 11 00:32:51.997151 systemd[1]: Reached target basic.target. 
Jul 11 00:32:51.997828 systemd[1]: System is tainted: cgroupsv1 Jul 11 00:32:51.997877 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 11 00:32:51.997897 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 11 00:32:51.998916 systemd[1]: Starting containerd.service... Jul 11 00:32:52.000508 systemd[1]: Starting dbus.service... Jul 11 00:32:52.002195 systemd[1]: Starting enable-oem-cloudinit.service... Jul 11 00:32:52.003993 systemd[1]: Starting extend-filesystems.service... Jul 11 00:32:52.004677 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 11 00:32:52.005991 systemd[1]: Starting motdgen.service... Jul 11 00:32:52.007720 systemd[1]: Starting prepare-helm.service... Jul 11 00:32:52.009285 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 11 00:32:52.011510 systemd[1]: Starting sshd-keygen.service... Jul 11 00:32:52.014036 systemd[1]: Starting systemd-logind.service... Jul 11 00:32:52.014628 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:32:52.014749 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:32:52.015721 jq[1291]: false Jul 11 00:32:52.015794 systemd[1]: Starting update-engine.service... Jul 11 00:32:52.017717 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 11 00:32:52.020814 jq[1307]: true Jul 11 00:32:52.020821 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:32:52.026464 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 11 00:32:52.027801 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jul 11 00:32:52.028013 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 11 00:32:52.034239 extend-filesystems[1292]: Found loop1 Jul 11 00:32:52.036176 jq[1318]: true Jul 11 00:32:52.036436 extend-filesystems[1292]: Found vda Jul 11 00:32:52.044066 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:32:52.044286 systemd[1]: Finished motdgen.service. Jul 11 00:32:52.045758 extend-filesystems[1292]: Found vda1 Jul 11 00:32:52.045758 extend-filesystems[1292]: Found vda2 Jul 11 00:32:52.045758 extend-filesystems[1292]: Found vda3 Jul 11 00:32:52.045758 extend-filesystems[1292]: Found usr Jul 11 00:32:52.048459 extend-filesystems[1292]: Found vda4 Jul 11 00:32:52.048459 extend-filesystems[1292]: Found vda6 Jul 11 00:32:52.048459 extend-filesystems[1292]: Found vda7 Jul 11 00:32:52.048459 extend-filesystems[1292]: Found vda9 Jul 11 00:32:52.048459 extend-filesystems[1292]: Checking size of /dev/vda9 Jul 11 00:32:52.052712 tar[1312]: linux-arm64/helm Jul 11 00:32:52.057287 dbus-daemon[1289]: [system] SELinux support is enabled Jul 11 00:32:52.057458 systemd[1]: Started dbus.service. Jul 11 00:32:52.059826 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:32:52.059848 systemd[1]: Reached target system-config.target. Jul 11 00:32:52.060678 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:32:52.060706 systemd[1]: Reached target user-config.target. Jul 11 00:32:52.078052 extend-filesystems[1292]: Resized partition /dev/vda9 Jul 11 00:32:52.096127 systemd-logind[1301]: Watching system buttons on /dev/input/event0 (Power Button) Jul 11 00:32:52.098955 extend-filesystems[1347]: resize2fs 1.46.5 (30-Dec-2021) Jul 11 00:32:52.102913 systemd-logind[1301]: New seat seat0. 
Jul 11 00:32:52.105309 systemd[1]: Started systemd-logind.service. Jul 11 00:32:52.113652 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 00:32:52.132330 update_engine[1302]: I0711 00:32:52.132088 1302 main.cc:92] Flatcar Update Engine starting Jul 11 00:32:52.134606 systemd[1]: Started update-engine.service. Jul 11 00:32:52.134711 update_engine[1302]: I0711 00:32:52.134622 1302 update_check_scheduler.cc:74] Next update check in 11m25s Jul 11 00:32:52.137507 systemd[1]: Started locksmithd.service. Jul 11 00:32:52.148683 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 00:32:52.173651 env[1319]: time="2025-07-11T00:32:52.171398313Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 11 00:32:52.174104 extend-filesystems[1347]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:32:52.174104 extend-filesystems[1347]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:32:52.174104 extend-filesystems[1347]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 00:32:52.178610 extend-filesystems[1292]: Resized filesystem in /dev/vda9 Jul 11 00:32:52.179269 bash[1348]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:32:52.175090 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:32:52.175334 systemd[1]: Finished extend-filesystems.service. Jul 11 00:32:52.176902 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 11 00:32:52.197380 env[1319]: time="2025-07-11T00:32:52.197321691Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 11 00:32:52.197685 env[1319]: time="2025-07-11T00:32:52.197665201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 11 00:32:52.200814 env[1319]: time="2025-07-11T00:32:52.199934622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:32:52.200814 env[1319]: time="2025-07-11T00:32:52.199973008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:32:52.200814 env[1319]: time="2025-07-11T00:32:52.200230931Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:32:52.200814 env[1319]: time="2025-07-11T00:32:52.200248597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 11 00:32:52.200814 env[1319]: time="2025-07-11T00:32:52.200263132Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 11 00:32:52.200814 env[1319]: time="2025-07-11T00:32:52.200282886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 11 00:32:52.200814 env[1319]: time="2025-07-11T00:32:52.200356335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:32:52.200814 env[1319]: time="2025-07-11T00:32:52.200704716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:32:52.201035 env[1319]: time="2025-07-11T00:32:52.200858070Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:32:52.201035 env[1319]: time="2025-07-11T00:32:52.200875543Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 11 00:32:52.201035 env[1319]: time="2025-07-11T00:32:52.200926261Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 11 00:32:52.201035 env[1319]: time="2025-07-11T00:32:52.200937858Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:32:52.208647 env[1319]: time="2025-07-11T00:32:52.205734120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 11 00:32:52.208647 env[1319]: time="2025-07-11T00:32:52.205768873Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 11 00:32:52.208647 env[1319]: time="2025-07-11T00:32:52.205782558Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 11 00:32:52.208647 env[1319]: time="2025-07-11T00:32:52.205814682Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 11 00:32:52.208647 env[1319]: time="2025-07-11T00:32:52.205830879Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 11 00:32:52.208647 env[1319]: time="2025-07-11T00:32:52.205844757Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 11 00:32:52.208647 env[1319]: time="2025-07-11T00:32:52.205857592Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jul 11 00:32:52.208647 env[1319]: time="2025-07-11T00:32:52.206187262Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 11 00:32:52.208647 env[1319]: time="2025-07-11T00:32:52.206209142Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 11 00:32:52.208647 env[1319]: time="2025-07-11T00:32:52.206223484Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 11 00:32:52.208647 env[1319]: time="2025-07-11T00:32:52.206235352Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 11 00:32:52.208647 env[1319]: time="2025-07-11T00:32:52.206249423Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 11 00:32:52.208647 env[1319]: time="2025-07-11T00:32:52.206363462Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 11 00:32:52.208647 env[1319]: time="2025-07-11T00:32:52.206428175Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 11 00:32:52.208438 systemd[1]: Started containerd.service. Jul 11 00:32:52.209009 env[1319]: time="2025-07-11T00:32:52.206714394Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 11 00:32:52.209009 env[1319]: time="2025-07-11T00:32:52.206739753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 11 00:32:52.209009 env[1319]: time="2025-07-11T00:32:52.206752742Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Jul 11 00:32:52.209009 env[1319]: time="2025-07-11T00:32:52.206854527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:32:52.209009 env[1319]: time="2025-07-11T00:32:52.206866318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:32:52.209009 env[1319]: time="2025-07-11T00:32:52.206877528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:32:52.209009 env[1319]: time="2025-07-11T00:32:52.206887695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 11 00:32:52.209009 env[1319]: time="2025-07-11T00:32:52.206899022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:32:52.209009 env[1319]: time="2025-07-11T00:32:52.206910465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 11 00:32:52.209009 env[1319]: time="2025-07-11T00:32:52.206921907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:32:52.209009 env[1319]: time="2025-07-11T00:32:52.206932963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 11 00:32:52.209009 env[1319]: time="2025-07-11T00:32:52.206944676Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 11 00:32:52.209009 env[1319]: time="2025-07-11T00:32:52.207086162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:32:52.209009 env[1319]: time="2025-07-11T00:32:52.207102592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 11 00:32:52.209009 env[1319]: time="2025-07-11T00:32:52.207113802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:32:52.209281 env[1319]: time="2025-07-11T00:32:52.207125400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:32:52.209281 env[1319]: time="2025-07-11T00:32:52.207138505Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 11 00:32:52.209281 env[1319]: time="2025-07-11T00:32:52.207150604Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 11 00:32:52.209281 env[1319]: time="2025-07-11T00:32:52.207167845Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 11 00:32:52.209281 env[1319]: time="2025-07-11T00:32:52.207199815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 11 00:32:52.209414 env[1319]: time="2025-07-11T00:32:52.207384559Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:32:52.209414 env[1319]: time="2025-07-11T00:32:52.207436166Z" level=info msg="Connect containerd service" Jul 11 00:32:52.209414 env[1319]: time="2025-07-11T00:32:52.207466976Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 11 00:32:52.209414 env[1319]: time="2025-07-11T00:32:52.207987846Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:32:52.209414 env[1319]: time="2025-07-11T00:32:52.208269774Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:32:52.209414 env[1319]: time="2025-07-11T00:32:52.208302363Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 11 00:32:52.209414 env[1319]: time="2025-07-11T00:32:52.208344499Z" level=info msg="containerd successfully booted in 0.056382s" Jul 11 00:32:52.212521 env[1319]: time="2025-07-11T00:32:52.211966233Z" level=info msg="Start subscribing containerd event" Jul 11 00:32:52.212655 env[1319]: time="2025-07-11T00:32:52.212637480Z" level=info msg="Start recovering state" Jul 11 00:32:52.212766 env[1319]: time="2025-07-11T00:32:52.212752795Z" level=info msg="Start event monitor" Jul 11 00:32:52.212853 env[1319]: time="2025-07-11T00:32:52.212827907Z" level=info msg="Start snapshots syncer" Jul 11 00:32:52.212924 env[1319]: time="2025-07-11T00:32:52.212910015Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:32:52.213005 env[1319]: time="2025-07-11T00:32:52.212964947Z" level=info msg="Start streaming server" Jul 11 00:32:52.224200 locksmithd[1350]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:32:52.447054 tar[1312]: linux-arm64/LICENSE Jul 11 00:32:52.447180 tar[1312]: linux-arm64/README.md Jul 11 00:32:52.451358 systemd[1]: Finished prepare-helm.service. Jul 11 00:32:52.455832 systemd-networkd[1094]: eth0: Gained IPv6LL Jul 11 00:32:52.457513 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 11 00:32:52.458605 systemd[1]: Reached target network-online.target. Jul 11 00:32:52.461156 systemd[1]: Starting kubelet.service... Jul 11 00:32:53.067778 systemd[1]: Started kubelet.service. Jul 11 00:32:53.524040 sshd_keygen[1317]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:32:53.543744 systemd[1]: Finished sshd-keygen.service. Jul 11 00:32:53.546155 systemd[1]: Starting issuegen.service... Jul 11 00:32:53.550832 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:32:53.551077 systemd[1]: Finished issuegen.service. Jul 11 00:32:53.553094 systemd[1]: Starting systemd-user-sessions.service... 
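The containerd (and later dockerd) messages above are logfmt-style lines: space-separated key=value pairs where values containing spaces are double-quoted. A hedged sketch of pulling the fields out of one such line, using the "successfully booted" entry from this log as input:

```python
import re

# Logfmt-style line copied from the containerd output above.
LINE = 'time="2025-07-11T00:32:52.208344499Z" level=info msg="containerd successfully booted in 0.056382s"'

# key="quoted value" or key=bareword; a sketch, not a full logfmt parser
# (it does not handle escaped quotes inside bare values, for instance).
FIELD = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')
fields = {k: (q if q else b) for k, q, b in FIELD.findall(LINE)}
print(fields["level"], "-", fields["msg"])
```

The same pattern recovers `time`, `level`, `msg`, and extra keys like `type=` or `error=` from the plugin-loading lines earlier in the log.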
Jul 11 00:32:53.559437 systemd[1]: Finished systemd-user-sessions.service. Jul 11 00:32:53.561830 systemd[1]: Started getty@tty1.service. Jul 11 00:32:53.563617 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 11 00:32:53.564388 systemd[1]: Reached target getty.target. Jul 11 00:32:53.565067 systemd[1]: Reached target multi-user.target. Jul 11 00:32:53.566849 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 11 00:32:53.574931 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 11 00:32:53.575094 kubelet[1375]: E0711 00:32:53.575043 1375 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:32:53.575177 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 11 00:32:53.576047 systemd[1]: Startup finished in 5.641s (kernel) + 4.982s (userspace) = 10.624s. Jul 11 00:32:53.577293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:32:53.577405 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:32:56.546778 systemd[1]: Created slice system-sshd.slice. Jul 11 00:32:56.548040 systemd[1]: Started sshd@0-10.0.0.84:22-10.0.0.1:50450.service. Jul 11 00:32:56.601210 sshd[1401]: Accepted publickey for core from 10.0.0.1 port 50450 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:32:56.603953 sshd[1401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:32:56.612490 systemd[1]: Created slice user-500.slice. Jul 11 00:32:56.613684 systemd[1]: Starting user-runtime-dir@500.service... Jul 11 00:32:56.615756 systemd-logind[1301]: New session 1 of user core. Jul 11 00:32:56.625754 systemd[1]: Finished user-runtime-dir@500.service. 
Jul 11 00:32:56.627159 systemd[1]: Starting user@500.service... Jul 11 00:32:56.630713 (systemd)[1406]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:32:56.693034 systemd[1406]: Queued start job for default target default.target. Jul 11 00:32:56.693274 systemd[1406]: Reached target paths.target. Jul 11 00:32:56.693289 systemd[1406]: Reached target sockets.target. Jul 11 00:32:56.693302 systemd[1406]: Reached target timers.target. Jul 11 00:32:56.693314 systemd[1406]: Reached target basic.target. Jul 11 00:32:56.693355 systemd[1406]: Reached target default.target. Jul 11 00:32:56.693378 systemd[1406]: Startup finished in 56ms. Jul 11 00:32:56.693607 systemd[1]: Started user@500.service. Jul 11 00:32:56.695510 systemd[1]: Started session-1.scope. Jul 11 00:32:56.746888 systemd[1]: Started sshd@1-10.0.0.84:22-10.0.0.1:50454.service. Jul 11 00:32:56.781118 sshd[1415]: Accepted publickey for core from 10.0.0.1 port 50454 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:32:56.782798 sshd[1415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:32:56.786568 systemd-logind[1301]: New session 2 of user core. Jul 11 00:32:56.788532 systemd[1]: Started session-2.scope. Jul 11 00:32:56.841965 sshd[1415]: pam_unix(sshd:session): session closed for user core Jul 11 00:32:56.843995 systemd[1]: Started sshd@2-10.0.0.84:22-10.0.0.1:50456.service. Jul 11 00:32:56.844946 systemd[1]: sshd@1-10.0.0.84:22-10.0.0.1:50454.service: Deactivated successfully. Jul 11 00:32:56.845948 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:32:56.846330 systemd-logind[1301]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:32:56.847118 systemd-logind[1301]: Removed session 2. 
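The SSH session churn above (sessions opened and closed within fractions of a second) can be measured directly from the journal timestamps. These prefixes carry no year, so one must be assumed; 2025 is taken from the ISO timestamps elsewhere in this log. A sketch using the session-2 open/close stamps above:

```python
from datetime import datetime

def parse_ts(s, year=2025):
    # Journal prefixes like "Jul 11 00:32:56.788532" omit the year;
    # the caller supplies it (assumed from context here).
    return datetime.strptime(f"{year} {s}", "%Y %b %d %H:%M:%S.%f")

opened = parse_ts("Jul 11 00:32:56.788532")  # Started session-2.scope
closed = parse_ts("Jul 11 00:32:56.841965")  # session closed for user core
print((closed - opened).total_seconds())
```

Sub-100ms sessions like this are typical of automated provisioning running one command per SSH connection.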
Jul 11 00:32:56.879277 sshd[1420]: Accepted publickey for core from 10.0.0.1 port 50456 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:32:56.880645 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:32:56.883814 systemd-logind[1301]: New session 3 of user core. Jul 11 00:32:56.884549 systemd[1]: Started session-3.scope. Jul 11 00:32:56.932742 sshd[1420]: pam_unix(sshd:session): session closed for user core Jul 11 00:32:56.935005 systemd[1]: Started sshd@3-10.0.0.84:22-10.0.0.1:50462.service. Jul 11 00:32:56.935831 systemd[1]: sshd@2-10.0.0.84:22-10.0.0.1:50456.service: Deactivated successfully. Jul 11 00:32:56.936887 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:32:56.937330 systemd-logind[1301]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:32:56.938084 systemd-logind[1301]: Removed session 3. Jul 11 00:32:56.967190 sshd[1427]: Accepted publickey for core from 10.0.0.1 port 50462 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:32:56.968244 sshd[1427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:32:56.972033 systemd-logind[1301]: New session 4 of user core. Jul 11 00:32:56.972776 systemd[1]: Started session-4.scope. Jul 11 00:32:57.025946 sshd[1427]: pam_unix(sshd:session): session closed for user core Jul 11 00:32:57.028039 systemd[1]: Started sshd@4-10.0.0.84:22-10.0.0.1:50472.service. Jul 11 00:32:57.028532 systemd[1]: sshd@3-10.0.0.84:22-10.0.0.1:50462.service: Deactivated successfully. Jul 11 00:32:57.029403 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:32:57.029520 systemd-logind[1301]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:32:57.030485 systemd-logind[1301]: Removed session 4. 
Jul 11 00:32:57.067983 sshd[1434]: Accepted publickey for core from 10.0.0.1 port 50472 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:32:57.069657 sshd[1434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:32:57.072700 systemd-logind[1301]: New session 5 of user core. Jul 11 00:32:57.073738 systemd[1]: Started session-5.scope. Jul 11 00:32:57.128769 sudo[1440]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:32:57.128971 sudo[1440]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 11 00:32:57.191111 systemd[1]: Starting docker.service... Jul 11 00:32:57.282928 env[1452]: time="2025-07-11T00:32:57.282858878Z" level=info msg="Starting up" Jul 11 00:32:57.284376 env[1452]: time="2025-07-11T00:32:57.284349290Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 11 00:32:57.284376 env[1452]: time="2025-07-11T00:32:57.284370518Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 11 00:32:57.284459 env[1452]: time="2025-07-11T00:32:57.284389662Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 11 00:32:57.284459 env[1452]: time="2025-07-11T00:32:57.284400316Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 11 00:32:57.286396 env[1452]: time="2025-07-11T00:32:57.286374808Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 11 00:32:57.286479 env[1452]: time="2025-07-11T00:32:57.286464713Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 11 00:32:57.286548 env[1452]: time="2025-07-11T00:32:57.286532486Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 11 00:32:57.286605 env[1452]: time="2025-07-11T00:32:57.286592907Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 11 00:32:57.471248 env[1452]: time="2025-07-11T00:32:57.471161027Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 11 00:32:57.471412 env[1452]: time="2025-07-11T00:32:57.471397367Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 11 00:32:57.471607 env[1452]: time="2025-07-11T00:32:57.471589953Z" level=info msg="Loading containers: start." Jul 11 00:32:57.606646 kernel: Initializing XFRM netlink socket Jul 11 00:32:57.629480 env[1452]: time="2025-07-11T00:32:57.629448831Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 11 00:32:57.690988 systemd-networkd[1094]: docker0: Link UP Jul 11 00:32:57.706854 env[1452]: time="2025-07-11T00:32:57.706817097Z" level=info msg="Loading containers: done." Jul 11 00:32:57.728940 env[1452]: time="2025-07-11T00:32:57.728820730Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:32:57.729073 env[1452]: time="2025-07-11T00:32:57.729012255Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 11 00:32:57.729237 env[1452]: time="2025-07-11T00:32:57.729104951Z" level=info msg="Daemon has completed initialization" Jul 11 00:32:57.742253 systemd[1]: Started docker.service. Jul 11 00:32:57.748399 env[1452]: time="2025-07-11T00:32:57.748355334Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:32:58.377174 env[1319]: time="2025-07-11T00:32:58.377132715Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 11 00:32:58.974466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870998362.mount: Deactivated successfully. 
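The PullImage lines that follow all use references of the form `registry/repository:tag`. A hedged sketch of splitting such a reference, using one of the images from this log; note it ignores digest (`@sha256:`) references and registries with ports, which need extra handling:

```python
def split_image(ref):
    # "registry.k8s.io/kube-apiserver:v1.31.10" -> (registry, repo, tag).
    # Sketch only: no digest or host:port support.
    name, _, tag = ref.rpartition(":")
    registry, _, repo = name.partition("/")
    return registry, repo, tag

print(split_image("registry.k8s.io/kube-apiserver:v1.31.10"))
```

Multi-segment repositories such as `coredns/coredns` in the later pulls come back as a single repo string, which is usually what callers want.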
Jul 11 00:33:00.191950 env[1319]: time="2025-07-11T00:33:00.191904489Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:00.193841 env[1319]: time="2025-07-11T00:33:00.193809344Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:00.195549 env[1319]: time="2025-07-11T00:33:00.195522515Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:00.197777 env[1319]: time="2025-07-11T00:33:00.197747395Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:00.198606 env[1319]: time="2025-07-11T00:33:00.198570116Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 11 00:33:00.201559 env[1319]: time="2025-07-11T00:33:00.201533697Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 11 00:33:01.560349 env[1319]: time="2025-07-11T00:33:01.560306704Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:01.561655 env[1319]: time="2025-07-11T00:33:01.561616710Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 11 00:33:01.563415 env[1319]: time="2025-07-11T00:33:01.563389992Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:01.565069 env[1319]: time="2025-07-11T00:33:01.565029914Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:01.566653 env[1319]: time="2025-07-11T00:33:01.566602957Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 11 00:33:01.567222 env[1319]: time="2025-07-11T00:33:01.567191161Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 11 00:33:02.802122 env[1319]: time="2025-07-11T00:33:02.802070952Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:02.805555 env[1319]: time="2025-07-11T00:33:02.803601596Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:02.805956 env[1319]: time="2025-07-11T00:33:02.805922004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:02.807819 env[1319]: time="2025-07-11T00:33:02.807785879Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:02.808859 env[1319]: time="2025-07-11T00:33:02.808820700Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 11 00:33:02.809365 env[1319]: time="2025-07-11T00:33:02.809333491Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 11 00:33:03.823617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974710276.mount: Deactivated successfully. Jul 11 00:33:03.824541 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:33:03.824691 systemd[1]: Stopped kubelet.service. Jul 11 00:33:03.826031 systemd[1]: Starting kubelet.service... Jul 11 00:33:03.917438 systemd[1]: Started kubelet.service. Jul 11 00:33:03.966006 kubelet[1589]: E0711 00:33:03.965956 1589 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:33:03.968606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:33:03.968776 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
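The "Scheduled restart job, restart counter is at 1" message above is systemd's automatic restart logic reacting to the kubelet exiting with status 1. A hypothetical [Service] fragment showing the kind of policy that produces this behavior; the actual kubelet.service shipped on the image may use different values:

```
# Hypothetical restart policy; the ~10 s gap between the failure at
# 00:32:53 and the restart at 00:33:03 in this log is consistent with
# something like RestartSec=10.
[Service]
Restart=always
RestartSec=10
```

With Restart=always, systemd keeps retrying even on clean exits, which is why the same config-file error repeats throughout this log until the file appears.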
Jul 11 00:33:10.374950 env[1319]: time="2025-07-11T00:33:10.374904092Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:10.376527 env[1319]: time="2025-07-11T00:33:10.376480534Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:10.378204 env[1319]: time="2025-07-11T00:33:10.378170710Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:10.379864 env[1319]: time="2025-07-11T00:33:10.379840828Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:10.380248 env[1319]: time="2025-07-11T00:33:10.380218797Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 11 00:33:10.380753 env[1319]: time="2025-07-11T00:33:10.380724618Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 00:33:10.919465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2513402337.mount: Deactivated successfully. 
Jul 11 00:33:11.797644 env[1319]: time="2025-07-11T00:33:11.797586682Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:11.798997 env[1319]: time="2025-07-11T00:33:11.798968444Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:11.800575 env[1319]: time="2025-07-11T00:33:11.800545646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:11.802421 env[1319]: time="2025-07-11T00:33:11.802394404Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:11.803275 env[1319]: time="2025-07-11T00:33:11.803224914Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 11 00:33:11.803860 env[1319]: time="2025-07-11T00:33:11.803819770Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:33:12.241402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount967503971.mount: Deactivated successfully. 
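The mount unit names above (e.g. `var-lib-containerd-tmpmounts-containerd\x2dmount967503971.mount`) use systemd's path escaping: `/` becomes `-`, and other unsafe characters become `\xNN`. A rough sketch of that rule; this is an approximation of `systemd-escape --path` (real systemd restricts to ASCII alphanumerics and has further edge cases):

```python
def systemd_escape(s):
    # Approximate systemd unit-name escaping: "/" -> "-", keep
    # alphanumerics, "_", and (non-leading) "." and ":"; escape the
    # rest as \xNN per UTF-8 byte. Sketch only, not the full spec.
    out = []
    for i, ch in enumerate(s):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch == "_" or (ch in ".:" and i > 0):
            out.append(ch)
        else:
            out.append("".join(r"\x%02x" % b for b in ch.encode()))
    return "".join(out)

print(systemd_escape("var/lib/containerd"))
print(systemd_escape("tmp-mounts"))
```

This is why a literal `-` in a path shows up as `\x2d` in the unit names logged above, while path separators become plain hyphens.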
Jul 11 00:33:12.249895 env[1319]: time="2025-07-11T00:33:12.249851911Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:12.251158 env[1319]: time="2025-07-11T00:33:12.251123470Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:12.252773 env[1319]: time="2025-07-11T00:33:12.252740904Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:12.257307 env[1319]: time="2025-07-11T00:33:12.257277900Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:33:12.257891 env[1319]: time="2025-07-11T00:33:12.257856552Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 11 00:33:12.258863 env[1319]: time="2025-07-11T00:33:12.258834196Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 11 00:33:12.782164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount453371835.mount: Deactivated successfully. Jul 11 00:33:14.219556 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 11 00:33:14.219744 systemd[1]: Stopped kubelet.service. Jul 11 00:33:14.221185 systemd[1]: Starting kubelet.service... Jul 11 00:33:14.317914 systemd[1]: Started kubelet.service. 
Jul 11 00:33:14.350145 kubelet[1605]: E0711 00:33:14.350091 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 11 00:33:14.351974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 00:33:14.352127 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 11 00:33:14.920650 env[1319]: time="2025-07-11T00:33:14.920594316Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:14.922175 env[1319]: time="2025-07-11T00:33:14.922135889Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:14.924157 env[1319]: time="2025-07-11T00:33:14.924108534Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:14.925989 env[1319]: time="2025-07-11T00:33:14.925958598Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:14.927031 env[1319]: time="2025-07-11T00:33:14.927002338Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 11 00:33:18.717140 systemd[1]: Stopped kubelet.service.
Jul 11 00:33:18.719428 systemd[1]: Starting kubelet.service...
Jul 11 00:33:18.746557 systemd[1]: Reloading.
Jul 11 00:33:18.793793 /usr/lib/systemd/system-generators/torcx-generator[1663]: time="2025-07-11T00:33:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Jul 11 00:33:18.793823 /usr/lib/systemd/system-generators/torcx-generator[1663]: time="2025-07-11T00:33:18Z" level=info msg="torcx already run"
Jul 11 00:33:18.883656 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 11 00:33:18.883675 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 11 00:33:18.898918 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:33:18.961310 systemd[1]: Started kubelet.service.
Jul 11 00:33:18.963460 systemd[1]: Stopping kubelet.service...
Jul 11 00:33:18.964037 systemd[1]: kubelet.service: Deactivated successfully.
Jul 11 00:33:18.964266 systemd[1]: Stopped kubelet.service.
Jul 11 00:33:18.966317 systemd[1]: Starting kubelet.service...
Jul 11 00:33:19.055597 systemd[1]: Started kubelet.service.
Jul 11 00:33:19.089611 kubelet[1722]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:33:19.089611 kubelet[1722]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 11 00:33:19.089611 kubelet[1722]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:33:19.089993 kubelet[1722]: I0711 00:33:19.089685 1722 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 11 00:33:19.563122 kubelet[1722]: I0711 00:33:19.563078 1722 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 11 00:33:19.563122 kubelet[1722]: I0711 00:33:19.563110 1722 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 11 00:33:19.563381 kubelet[1722]: I0711 00:33:19.563350 1722 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 11 00:33:19.596942 kubelet[1722]: E0711 00:33:19.596901 1722 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:33:19.599372 kubelet[1722]: I0711 00:33:19.599334 1722 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 11 00:33:19.604669 kubelet[1722]: E0711 00:33:19.604641 1722 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 11 00:33:19.605067 kubelet[1722]: I0711 00:33:19.605048 1722 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 11 00:33:19.608792 kubelet[1722]: I0711 00:33:19.608773 1722 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 11 00:33:19.609979 kubelet[1722]: I0711 00:33:19.609958 1722 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 11 00:33:19.610227 kubelet[1722]: I0711 00:33:19.610185 1722 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 11 00:33:19.610469 kubelet[1722]: I0711 00:33:19.610301 1722 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jul 11 00:33:19.610590 kubelet[1722]: I0711 00:33:19.610577 1722 topology_manager.go:138] "Creating topology manager with none policy"
Jul 11 00:33:19.610672 kubelet[1722]: I0711 00:33:19.610662 1722 container_manager_linux.go:300] "Creating device plugin manager"
Jul 11 00:33:19.611057 kubelet[1722]: I0711 00:33:19.611042 1722 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:33:19.616081 kubelet[1722]: I0711 00:33:19.616046 1722 kubelet.go:408] "Attempting to sync node with API server"
Jul 11 00:33:19.616081 kubelet[1722]: I0711 00:33:19.616086 1722 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 11 00:33:19.616192 kubelet[1722]: I0711 00:33:19.616128 1722 kubelet.go:314] "Adding apiserver pod source"
Jul 11 00:33:19.616253 kubelet[1722]: I0711 00:33:19.616228 1722 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 11 00:33:19.626129 kubelet[1722]: W0711 00:33:19.626001 1722 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
Jul 11 00:33:19.626129 kubelet[1722]: E0711 00:33:19.626071 1722 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:33:19.626262 kubelet[1722]: I0711 00:33:19.626171 1722 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 11 00:33:19.626553 kubelet[1722]: W0711 00:33:19.626519 1722 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
Jul 11 00:33:19.626692 kubelet[1722]: E0711 00:33:19.626673 1722 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:33:19.627005 kubelet[1722]: I0711 00:33:19.626991 1722 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 11 00:33:19.627194 kubelet[1722]: W0711 00:33:19.627169 1722 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 11 00:33:19.628412 kubelet[1722]: I0711 00:33:19.628395 1722 server.go:1274] "Started kubelet"
Jul 11 00:33:19.629786 kubelet[1722]: I0711 00:33:19.629753 1722 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 11 00:33:19.635211 kubelet[1722]: I0711 00:33:19.635175 1722 server.go:449] "Adding debug handlers to kubelet server"
Jul 11 00:33:19.636377 kubelet[1722]: I0711 00:33:19.636309 1722 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 11 00:33:19.636619 kubelet[1722]: I0711 00:33:19.636595 1722 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 11 00:33:19.638808 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 11 00:33:19.639019 kubelet[1722]: I0711 00:33:19.639002 1722 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 11 00:33:19.640464 kubelet[1722]: I0711 00:33:19.640439 1722 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 11 00:33:19.640570 kubelet[1722]: I0711 00:33:19.640546 1722 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 11 00:33:19.640950 kubelet[1722]: E0711 00:33:19.640927 1722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:33:19.641016 kubelet[1722]: I0711 00:33:19.641004 1722 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 11 00:33:19.641083 kubelet[1722]: I0711 00:33:19.641074 1722 reconciler.go:26] "Reconciler: start to sync state"
Jul 11 00:33:19.642413 kubelet[1722]: E0711 00:33:19.638892 1722 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.84:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510b28bd6b7983 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:33:19.628368259 +0000 UTC m=+0.569083071,LastTimestamp:2025-07-11 00:33:19.628368259 +0000 UTC m=+0.569083071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 11 00:33:19.644857 kubelet[1722]: W0711 00:33:19.644527 1722 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
Jul 11 00:33:19.644857 kubelet[1722]: E0711 00:33:19.644590 1722 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:33:19.648045 kubelet[1722]: E0711 00:33:19.647999 1722 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="200ms"
Jul 11 00:33:19.648403 kubelet[1722]: I0711 00:33:19.648389 1722 factory.go:221] Registration of the systemd container factory successfully
Jul 11 00:33:19.648741 kubelet[1722]: E0711 00:33:19.648715 1722 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 11 00:33:19.648909 kubelet[1722]: I0711 00:33:19.648892 1722 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 11 00:33:19.650163 kubelet[1722]: I0711 00:33:19.650130 1722 factory.go:221] Registration of the containerd container factory successfully
Jul 11 00:33:19.660345 kubelet[1722]: I0711 00:33:19.660309 1722 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 11 00:33:19.662896 kubelet[1722]: I0711 00:33:19.662873 1722 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 11 00:33:19.663003 kubelet[1722]: I0711 00:33:19.662993 1722 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 11 00:33:19.663419 kubelet[1722]: I0711 00:33:19.663404 1722 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 11 00:33:19.663934 kubelet[1722]: E0711 00:33:19.663894 1722 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 11 00:33:19.664240 kubelet[1722]: W0711 00:33:19.664038 1722 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
Jul 11 00:33:19.664240 kubelet[1722]: E0711 00:33:19.664079 1722 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:33:19.666246 kubelet[1722]: I0711 00:33:19.666223 1722 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 11 00:33:19.666387 kubelet[1722]: I0711 00:33:19.666375 1722 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 11 00:33:19.666470 kubelet[1722]: I0711 00:33:19.666460 1722 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:33:19.741287 kubelet[1722]: I0711 00:33:19.741251 1722 policy_none.go:49] "None policy: Start"
Jul 11 00:33:19.741433 kubelet[1722]: E0711 00:33:19.741261 1722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:33:19.742079 kubelet[1722]: I0711 00:33:19.742042 1722 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 11 00:33:19.747211 kubelet[1722]: I0711 00:33:19.747145 1722 state_mem.go:35] "Initializing new in-memory state store"
Jul 11 00:33:19.752807 kubelet[1722]: I0711 00:33:19.752778 1722 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 11 00:33:19.752946 kubelet[1722]: I0711 00:33:19.752931 1722 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 11 00:33:19.752989 kubelet[1722]: I0711 00:33:19.752948 1722 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 11 00:33:19.753688 kubelet[1722]: I0711 00:33:19.753657 1722 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 11 00:33:19.754124 kubelet[1722]: E0711 00:33:19.754093 1722 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 11 00:33:19.851833 kubelet[1722]: E0711 00:33:19.849294 1722 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="400ms"
Jul 11 00:33:19.859271 kubelet[1722]: I0711 00:33:19.859191 1722 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 11 00:33:19.859807 kubelet[1722]: E0711 00:33:19.859782 1722 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost"
Jul 11 00:33:19.942085 kubelet[1722]: I0711 00:33:19.942044 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 11 00:33:19.942232 kubelet[1722]: I0711 00:33:19.942111 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e593960f33e41713b416481ccb04f73-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e593960f33e41713b416481ccb04f73\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:33:19.942232 kubelet[1722]: I0711 00:33:19.942137 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:33:19.942232 kubelet[1722]: I0711 00:33:19.942154 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:33:19.942232 kubelet[1722]: I0711 00:33:19.942172 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e593960f33e41713b416481ccb04f73-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e593960f33e41713b416481ccb04f73\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:33:19.942232 kubelet[1722]: I0711 00:33:19.942196 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e593960f33e41713b416481ccb04f73-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e593960f33e41713b416481ccb04f73\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:33:19.942341 kubelet[1722]: I0711 00:33:19.942213 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:33:19.942341 kubelet[1722]: I0711 00:33:19.942230 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:33:19.942341 kubelet[1722]: I0711 00:33:19.942247 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:33:20.061680 kubelet[1722]: I0711 00:33:20.061616 1722 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 11 00:33:20.062199 kubelet[1722]: E0711 00:33:20.062173 1722 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost"
Jul 11 00:33:20.069384 kubelet[1722]: E0711 00:33:20.069363 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:20.070330 kubelet[1722]: E0711 00:33:20.069806 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:20.070395 env[1319]: time="2025-07-11T00:33:20.070075608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e593960f33e41713b416481ccb04f73,Namespace:kube-system,Attempt:0,}"
Jul 11 00:33:20.070911 env[1319]: time="2025-07-11T00:33:20.070868814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}"
Jul 11 00:33:20.072180 kubelet[1722]: E0711 00:33:20.072049 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:20.072682 env[1319]: time="2025-07-11T00:33:20.072656066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}"
Jul 11 00:33:20.251768 kubelet[1722]: E0711 00:33:20.250754 1722 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="800ms"
Jul 11 00:33:20.464523 kubelet[1722]: I0711 00:33:20.463790 1722 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 11 00:33:20.464523 kubelet[1722]: E0711 00:33:20.464135 1722 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost"
Jul 11 00:33:20.577344 kubelet[1722]: W0711 00:33:20.577013 1722 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
Jul 11 00:33:20.577344 kubelet[1722]: E0711 00:33:20.577088 1722 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:33:20.634407 kubelet[1722]: W0711 00:33:20.634332 1722 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
Jul 11 00:33:20.634407 kubelet[1722]: E0711 00:33:20.634388 1722 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:33:20.650842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111444208.mount: Deactivated successfully.
Jul 11 00:33:20.654545 env[1319]: time="2025-07-11T00:33:20.654482679Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:20.658048 env[1319]: time="2025-07-11T00:33:20.657970893Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:20.662878 env[1319]: time="2025-07-11T00:33:20.662844120Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:20.664075 env[1319]: time="2025-07-11T00:33:20.664044521Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:20.665785 env[1319]: time="2025-07-11T00:33:20.665750238Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:20.668114 env[1319]: time="2025-07-11T00:33:20.668084054Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:20.671491 env[1319]: time="2025-07-11T00:33:20.671454761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:20.674066 env[1319]: time="2025-07-11T00:33:20.674017914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:20.675609 env[1319]: time="2025-07-11T00:33:20.675578267Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:20.676774 env[1319]: time="2025-07-11T00:33:20.676745654Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:20.677797 env[1319]: time="2025-07-11T00:33:20.677770875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:20.678579 env[1319]: time="2025-07-11T00:33:20.678548934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:20.709273 env[1319]: time="2025-07-11T00:33:20.709195332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:33:20.709273 env[1319]: time="2025-07-11T00:33:20.709237219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:33:20.709273 env[1319]: time="2025-07-11T00:33:20.709256204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:33:20.710340 env[1319]: time="2025-07-11T00:33:20.709532743Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b886d3e89d73e0295185ef3c952150c5b3c504978819c4d2fe780fd19a81050 pid=1772 runtime=io.containerd.runc.v2
Jul 11 00:33:20.712223 env[1319]: time="2025-07-11T00:33:20.710844255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:33:20.712223 env[1319]: time="2025-07-11T00:33:20.710907764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:33:20.712223 env[1319]: time="2025-07-11T00:33:20.710934983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:33:20.712223 env[1319]: time="2025-07-11T00:33:20.711223072Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ac0b521d5f7b995e3dabcd69d11fa4dbcc3cf8e09104607759ae7281893d9aa pid=1785 runtime=io.containerd.runc.v2
Jul 11 00:33:20.712223 env[1319]: time="2025-07-11T00:33:20.711688261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:33:20.712223 env[1319]: time="2025-07-11T00:33:20.711721554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:33:20.712223 env[1319]: time="2025-07-11T00:33:20.711733665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:33:20.712223 env[1319]: time="2025-07-11T00:33:20.711838061Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c5539067be17a3aa8e823aad37385d05d23684135acef3578da8ae89d67ad25 pid=1787 runtime=io.containerd.runc.v2
Jul 11 00:33:20.803861 env[1319]: time="2025-07-11T00:33:20.803817864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c5539067be17a3aa8e823aad37385d05d23684135acef3578da8ae89d67ad25\""
Jul 11 00:33:20.805529 kubelet[1722]: E0711 00:33:20.805490 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:20.806076 env[1319]: time="2025-07-11T00:33:20.806039050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ac0b521d5f7b995e3dabcd69d11fa4dbcc3cf8e09104607759ae7281893d9aa\""
Jul 11 00:33:20.807719 env[1319]: time="2025-07-11T00:33:20.807676262Z" level=info msg="CreateContainer within sandbox \"2c5539067be17a3aa8e823aad37385d05d23684135acef3578da8ae89d67ad25\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 11 00:33:20.807813 kubelet[1722]: E0711 00:33:20.807736 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:20.810740 env[1319]: time="2025-07-11T00:33:20.810616153Z" level=info msg="CreateContainer within sandbox \"8ac0b521d5f7b995e3dabcd69d11fa4dbcc3cf8e09104607759ae7281893d9aa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 11 00:33:20.818228 env[1319]: time="2025-07-11T00:33:20.818171278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e593960f33e41713b416481ccb04f73,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b886d3e89d73e0295185ef3c952150c5b3c504978819c4d2fe780fd19a81050\""
Jul 11 00:33:20.818758 kubelet[1722]: E0711 00:33:20.818727 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:20.820029 env[1319]: time="2025-07-11T00:33:20.819994821Z" level=info msg="CreateContainer within sandbox \"2b886d3e89d73e0295185ef3c952150c5b3c504978819c4d2fe780fd19a81050\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 11 00:33:20.821451 env[1319]: time="2025-07-11T00:33:20.821416965Z" level=info msg="CreateContainer within sandbox \"2c5539067be17a3aa8e823aad37385d05d23684135acef3578da8ae89d67ad25\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e067d6839afd19cc62c3d8d2e82e5385b3d329d63e9f38ed924124b56bf8876d\""
Jul 11 00:33:20.822005 env[1319]: time="2025-07-11T00:33:20.821973121Z" level=info msg="StartContainer for \"e067d6839afd19cc62c3d8d2e82e5385b3d329d63e9f38ed924124b56bf8876d\""
Jul 11 00:33:20.827170 env[1319]: time="2025-07-11T00:33:20.827124086Z" level=info msg="CreateContainer within sandbox \"8ac0b521d5f7b995e3dabcd69d11fa4dbcc3cf8e09104607759ae7281893d9aa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d59bcab81bd119b2d110983c8fb8489b52af3c299cb3caafdab958be129d82b0\""
Jul 11 00:33:20.828948 env[1319]: time="2025-07-11T00:33:20.827565333Z" level=info msg="StartContainer for \"d59bcab81bd119b2d110983c8fb8489b52af3c299cb3caafdab958be129d82b0\""
Jul 11 00:33:20.837278 env[1319]: time="2025-07-11T00:33:20.837232771Z" level=info msg="CreateContainer within sandbox 
\"2b886d3e89d73e0295185ef3c952150c5b3c504978819c4d2fe780fd19a81050\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"137caf6497dd9da4c3337106e3d17bb1b675f6056d435094c4c73e8c03a62a09\"" Jul 11 00:33:20.837906 env[1319]: time="2025-07-11T00:33:20.837809350Z" level=info msg="StartContainer for \"137caf6497dd9da4c3337106e3d17bb1b675f6056d435094c4c73e8c03a62a09\"" Jul 11 00:33:20.954669 env[1319]: time="2025-07-11T00:33:20.954252490Z" level=info msg="StartContainer for \"d59bcab81bd119b2d110983c8fb8489b52af3c299cb3caafdab958be129d82b0\" returns successfully" Jul 11 00:33:20.955081 env[1319]: time="2025-07-11T00:33:20.954949614Z" level=info msg="StartContainer for \"137caf6497dd9da4c3337106e3d17bb1b675f6056d435094c4c73e8c03a62a09\" returns successfully" Jul 11 00:33:20.981472 env[1319]: time="2025-07-11T00:33:20.977290647Z" level=info msg="StartContainer for \"e067d6839afd19cc62c3d8d2e82e5385b3d329d63e9f38ed924124b56bf8876d\" returns successfully" Jul 11 00:33:21.052269 kubelet[1722]: E0711 00:33:21.052218 1722 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="1.6s" Jul 11 00:33:21.097881 kubelet[1722]: W0711 00:33:21.097751 1722 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 11 00:33:21.097881 kubelet[1722]: E0711 00:33:21.097819 1722 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" 
logger="UnhandledError" Jul 11 00:33:21.265624 kubelet[1722]: I0711 00:33:21.265591 1722 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:33:21.670940 kubelet[1722]: E0711 00:33:21.670911 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:21.672441 kubelet[1722]: E0711 00:33:21.672414 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:21.674403 kubelet[1722]: E0711 00:33:21.674381 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:22.675983 kubelet[1722]: E0711 00:33:22.675953 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:23.166793 kubelet[1722]: E0711 00:33:23.166700 1722 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 00:33:23.259124 kubelet[1722]: I0711 00:33:23.259081 1722 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:33:23.259124 kubelet[1722]: E0711 00:33:23.259125 1722 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 11 00:33:23.267562 kubelet[1722]: E0711 00:33:23.267518 1722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:33:23.367950 kubelet[1722]: E0711 00:33:23.367918 1722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not 
found" Jul 11 00:33:23.468611 kubelet[1722]: E0711 00:33:23.468478 1722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:33:23.569554 kubelet[1722]: E0711 00:33:23.569515 1722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:33:23.670726 kubelet[1722]: E0711 00:33:23.670679 1722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:33:23.771674 kubelet[1722]: E0711 00:33:23.771543 1722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:33:23.872656 kubelet[1722]: E0711 00:33:23.872559 1722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:33:23.896212 kubelet[1722]: E0711 00:33:23.896175 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:24.128309 kubelet[1722]: E0711 00:33:24.128212 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:24.364219 kubelet[1722]: E0711 00:33:24.364184 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:24.628385 kubelet[1722]: I0711 00:33:24.628305 1722 apiserver.go:52] "Watching apiserver" Jul 11 00:33:24.641760 kubelet[1722]: I0711 00:33:24.641726 1722 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:33:24.677193 kubelet[1722]: E0711 00:33:24.677167 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:24.677340 kubelet[1722]: E0711 00:33:24.677211 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:25.033545 systemd[1]: Reloading. Jul 11 00:33:25.080814 /usr/lib/systemd/system-generators/torcx-generator[2019]: time="2025-07-11T00:33:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 11 00:33:25.080845 /usr/lib/systemd/system-generators/torcx-generator[2019]: time="2025-07-11T00:33:25Z" level=info msg="torcx already run" Jul 11 00:33:25.144219 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 11 00:33:25.144239 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 11 00:33:25.159748 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:33:25.226990 systemd[1]: Stopping kubelet.service... Jul 11 00:33:25.250285 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:33:25.250585 systemd[1]: Stopped kubelet.service. Jul 11 00:33:25.252428 systemd[1]: Starting kubelet.service... Jul 11 00:33:25.340884 systemd[1]: Started kubelet.service. Jul 11 00:33:25.376546 kubelet[2072]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:33:25.376546 kubelet[2072]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 00:33:25.376546 kubelet[2072]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:33:25.377016 kubelet[2072]: I0711 00:33:25.376591 2072 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:33:25.383549 kubelet[2072]: I0711 00:33:25.383513 2072 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 00:33:25.383549 kubelet[2072]: I0711 00:33:25.383543 2072 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:33:25.384215 kubelet[2072]: I0711 00:33:25.384074 2072 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 00:33:25.385637 kubelet[2072]: I0711 00:33:25.385614 2072 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 00:33:25.387524 kubelet[2072]: I0711 00:33:25.387492 2072 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:33:25.390290 kubelet[2072]: E0711 00:33:25.390269 2072 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:33:25.390290 kubelet[2072]: I0711 00:33:25.390291 2072 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Jul 11 00:33:25.392513 kubelet[2072]: I0711 00:33:25.392485 2072 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 11 00:33:25.392832 kubelet[2072]: I0711 00:33:25.392814 2072 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 00:33:25.392939 kubelet[2072]: I0711 00:33:25.392909 2072 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:33:25.393092 kubelet[2072]: I0711 00:33:25.392933 2072 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"Experiment
alMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 11 00:33:25.393170 kubelet[2072]: I0711 00:33:25.393093 2072 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:33:25.393170 kubelet[2072]: I0711 00:33:25.393102 2072 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 00:33:25.393170 kubelet[2072]: I0711 00:33:25.393133 2072 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:33:25.393247 kubelet[2072]: I0711 00:33:25.393213 2072 kubelet.go:408] "Attempting to sync node with API server" Jul 11 00:33:25.393247 kubelet[2072]: I0711 00:33:25.393224 2072 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:33:25.393247 kubelet[2072]: I0711 00:33:25.393239 2072 kubelet.go:314] "Adding apiserver pod source" Jul 11 00:33:25.393305 kubelet[2072]: I0711 00:33:25.393251 2072 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:33:25.393683 kubelet[2072]: I0711 00:33:25.393660 2072 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 11 00:33:25.394485 kubelet[2072]: I0711 00:33:25.394454 2072 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:33:25.395721 kubelet[2072]: I0711 00:33:25.395697 2072 server.go:1274] "Started kubelet" Jul 11 00:33:25.397478 kubelet[2072]: I0711 00:33:25.397451 2072 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:33:25.398975 kubelet[2072]: I0711 00:33:25.398546 2072 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:33:25.398975 kubelet[2072]: I0711 00:33:25.398896 2072 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 
Jul 11 00:33:25.399363 kubelet[2072]: I0711 00:33:25.399342 2072 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:33:25.399505 kubelet[2072]: I0711 00:33:25.399488 2072 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 00:33:25.399671 kubelet[2072]: E0711 00:33:25.399649 2072 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:33:25.399833 kubelet[2072]: I0711 00:33:25.399817 2072 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:33:25.399978 kubelet[2072]: I0711 00:33:25.399951 2072 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 00:33:25.400091 kubelet[2072]: I0711 00:33:25.400070 2072 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:33:25.402006 kubelet[2072]: I0711 00:33:25.401982 2072 server.go:449] "Adding debug handlers to kubelet server" Jul 11 00:33:25.428167 kubelet[2072]: I0711 00:33:25.428139 2072 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:33:25.428292 kubelet[2072]: I0711 00:33:25.428281 2072 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:33:25.429869 kubelet[2072]: I0711 00:33:25.429837 2072 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:33:25.434877 kubelet[2072]: E0711 00:33:25.434473 2072 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:33:25.448063 kubelet[2072]: I0711 00:33:25.448011 2072 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 11 00:33:25.450626 kubelet[2072]: I0711 00:33:25.450599 2072 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 00:33:25.450626 kubelet[2072]: I0711 00:33:25.450627 2072 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 00:33:25.450755 kubelet[2072]: I0711 00:33:25.450657 2072 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 00:33:25.450755 kubelet[2072]: E0711 00:33:25.450702 2072 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:33:25.485713 kubelet[2072]: I0711 00:33:25.485683 2072 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 00:33:25.485919 kubelet[2072]: I0711 00:33:25.485904 2072 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 00:33:25.486001 kubelet[2072]: I0711 00:33:25.485993 2072 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:33:25.486263 kubelet[2072]: I0711 00:33:25.486247 2072 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:33:25.486388 kubelet[2072]: I0711 00:33:25.486340 2072 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:33:25.486478 kubelet[2072]: I0711 00:33:25.486468 2072 policy_none.go:49] "None policy: Start" Jul 11 00:33:25.487320 kubelet[2072]: I0711 00:33:25.487305 2072 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 00:33:25.487424 kubelet[2072]: I0711 00:33:25.487413 2072 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:33:25.487678 kubelet[2072]: I0711 00:33:25.487666 2072 state_mem.go:75] "Updated machine memory state" Jul 11 00:33:25.489493 kubelet[2072]: I0711 00:33:25.489474 2072 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:33:25.490236 kubelet[2072]: I0711 00:33:25.490127 2072 eviction_manager.go:189] 
"Eviction manager: starting control loop" Jul 11 00:33:25.490444 kubelet[2072]: I0711 00:33:25.490402 2072 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:33:25.491149 kubelet[2072]: I0711 00:33:25.491134 2072 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:33:25.557395 kubelet[2072]: E0711 00:33:25.557334 2072 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:33:25.557395 kubelet[2072]: E0711 00:33:25.557396 2072 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:33:25.596041 kubelet[2072]: I0711 00:33:25.594442 2072 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:33:25.603483 kubelet[2072]: I0711 00:33:25.603453 2072 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 11 00:33:25.603603 kubelet[2072]: I0711 00:33:25.603527 2072 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:33:25.700889 kubelet[2072]: I0711 00:33:25.700836 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:33:25.700889 kubelet[2072]: I0711 00:33:25.700882 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 
00:33:25.701036 kubelet[2072]: I0711 00:33:25.700903 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e593960f33e41713b416481ccb04f73-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e593960f33e41713b416481ccb04f73\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:33:25.701036 kubelet[2072]: I0711 00:33:25.700963 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:33:25.701036 kubelet[2072]: I0711 00:33:25.700981 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e593960f33e41713b416481ccb04f73-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e593960f33e41713b416481ccb04f73\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:33:25.701036 kubelet[2072]: I0711 00:33:25.700996 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e593960f33e41713b416481ccb04f73-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e593960f33e41713b416481ccb04f73\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:33:25.701036 kubelet[2072]: I0711 00:33:25.701016 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 
11 00:33:25.701166 kubelet[2072]: I0711 00:33:25.701031 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:33:25.701166 kubelet[2072]: I0711 00:33:25.701058 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:33:25.858972 kubelet[2072]: E0711 00:33:25.858533 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:25.858972 kubelet[2072]: E0711 00:33:25.858585 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:25.858972 kubelet[2072]: E0711 00:33:25.858731 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:26.092874 sudo[2108]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 11 00:33:26.093406 sudo[2108]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 11 00:33:26.395400 kubelet[2072]: I0711 00:33:26.395355 2072 apiserver.go:52] "Watching apiserver" Jul 11 00:33:26.400425 kubelet[2072]: I0711 00:33:26.400376 2072 desired_state_of_world_populator.go:155] "Finished populating initial desired 
state of world" Jul 11 00:33:26.470028 kubelet[2072]: E0711 00:33:26.469973 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:26.472921 kubelet[2072]: E0711 00:33:26.472896 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:26.475062 kubelet[2072]: E0711 00:33:26.475034 2072 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:33:26.475305 kubelet[2072]: E0711 00:33:26.475288 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:26.498043 kubelet[2072]: I0711 00:33:26.497990 2072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.497958556 podStartE2EDuration="1.497958556s" podCreationTimestamp="2025-07-11 00:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:33:26.495649944 +0000 UTC m=+1.150966780" watchObservedRunningTime="2025-07-11 00:33:26.497958556 +0000 UTC m=+1.153275392" Jul 11 00:33:26.509394 kubelet[2072]: I0711 00:33:26.509349 2072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.509321841 podStartE2EDuration="2.509321841s" podCreationTimestamp="2025-07-11 00:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:33:26.509161346 +0000 UTC m=+1.164478182" 
watchObservedRunningTime="2025-07-11 00:33:26.509321841 +0000 UTC m=+1.164638637"
Jul 11 00:33:26.509685 kubelet[2072]: I0711 00:33:26.509658 2072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.509622908 podStartE2EDuration="2.509622908s" podCreationTimestamp="2025-07-11 00:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:33:26.502367121 +0000 UTC m=+1.157683957" watchObservedRunningTime="2025-07-11 00:33:26.509622908 +0000 UTC m=+1.164939744"
Jul 11 00:33:26.585004 sudo[2108]: pam_unix(sudo:session): session closed for user root
Jul 11 00:33:27.469805 kubelet[2072]: E0711 00:33:27.469775 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:27.470156 kubelet[2072]: E0711 00:33:27.469771 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:27.969871 kubelet[2072]: E0711 00:33:27.969778 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:28.471142 kubelet[2072]: E0711 00:33:28.471027 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:28.499030 sudo[1440]: pam_unix(sudo:session): session closed for user root
Jul 11 00:33:28.500416 sshd[1434]: pam_unix(sshd:session): session closed for user core
Jul 11 00:33:28.502846 systemd[1]: sshd@4-10.0.0.84:22-10.0.0.1:50472.service: Deactivated successfully.
Jul 11 00:33:28.503914 systemd[1]: session-5.scope: Deactivated successfully.
Jul 11 00:33:28.503919 systemd-logind[1301]: Session 5 logged out. Waiting for processes to exit.
Jul 11 00:33:28.504871 systemd-logind[1301]: Removed session 5.
Jul 11 00:33:30.206533 kubelet[2072]: I0711 00:33:30.206496 2072 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 11 00:33:30.206925 env[1319]: time="2025-07-11T00:33:30.206834975Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 11 00:33:30.207104 kubelet[2072]: I0711 00:33:30.206995 2072 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 11 00:33:31.238782 kubelet[2072]: I0711 00:33:31.238736 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-host-proc-sys-kernel\") pod \"cilium-rpcgb\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") " pod="kube-system/cilium-rpcgb"
Jul 11 00:33:31.238782 kubelet[2072]: I0711 00:33:31.238779 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-hostproc\") pod \"cilium-rpcgb\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") " pod="kube-system/cilium-rpcgb"
Jul 11 00:33:31.239137 kubelet[2072]: I0711 00:33:31.238798 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-etc-cni-netd\") pod \"cilium-rpcgb\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") " pod="kube-system/cilium-rpcgb"
Jul 11 00:33:31.239137 kubelet[2072]: I0711 00:33:31.238813 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9549144-1825-4910-8356-eefc67f31af4-lib-modules\") pod \"kube-proxy-5wj76\" (UID: \"d9549144-1825-4910-8356-eefc67f31af4\") " pod="kube-system/kube-proxy-5wj76"
Jul 11 00:33:31.239137 kubelet[2072]: I0711 00:33:31.238827 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-cilium-run\") pod \"cilium-rpcgb\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") " pod="kube-system/cilium-rpcgb"
Jul 11 00:33:31.239137 kubelet[2072]: I0711 00:33:31.238844 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-xtables-lock\") pod \"cilium-rpcgb\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") " pod="kube-system/cilium-rpcgb"
Jul 11 00:33:31.239137 kubelet[2072]: I0711 00:33:31.238860 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-cni-path\") pod \"cilium-rpcgb\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") " pod="kube-system/cilium-rpcgb"
Jul 11 00:33:31.239137 kubelet[2072]: I0711 00:33:31.238875 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2918b70f-208d-44aa-96fa-7bf11f462149-clustermesh-secrets\") pod \"cilium-rpcgb\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") " pod="kube-system/cilium-rpcgb"
Jul 11 00:33:31.239299 kubelet[2072]: I0711 00:33:31.238891 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-bpf-maps\") pod \"cilium-rpcgb\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") " pod="kube-system/cilium-rpcgb"
Jul 11 00:33:31.239299 kubelet[2072]: I0711 00:33:31.238911 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx4z4\" (UniqueName: \"kubernetes.io/projected/7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a-kube-api-access-nx4z4\") pod \"cilium-operator-5d85765b45-6t6p6\" (UID: \"7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a\") " pod="kube-system/cilium-operator-5d85765b45-6t6p6"
Jul 11 00:33:31.239299 kubelet[2072]: I0711 00:33:31.238928 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl2jj\" (UniqueName: \"kubernetes.io/projected/2918b70f-208d-44aa-96fa-7bf11f462149-kube-api-access-bl2jj\") pod \"cilium-rpcgb\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") " pod="kube-system/cilium-rpcgb"
Jul 11 00:33:31.239299 kubelet[2072]: I0711 00:33:31.238944 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d9549144-1825-4910-8356-eefc67f31af4-kube-proxy\") pod \"kube-proxy-5wj76\" (UID: \"d9549144-1825-4910-8356-eefc67f31af4\") " pod="kube-system/kube-proxy-5wj76"
Jul 11 00:33:31.239299 kubelet[2072]: I0711 00:33:31.238957 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-cilium-cgroup\") pod \"cilium-rpcgb\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") " pod="kube-system/cilium-rpcgb"
Jul 11 00:33:31.239413 kubelet[2072]: I0711 00:33:31.238974 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-host-proc-sys-net\") pod \"cilium-rpcgb\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") " pod="kube-system/cilium-rpcgb"
Jul 11 00:33:31.239413 kubelet[2072]: I0711 00:33:31.238989 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a-cilium-config-path\") pod \"cilium-operator-5d85765b45-6t6p6\" (UID: \"7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a\") " pod="kube-system/cilium-operator-5d85765b45-6t6p6"
Jul 11 00:33:31.239413 kubelet[2072]: I0711 00:33:31.239004 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9549144-1825-4910-8356-eefc67f31af4-xtables-lock\") pod \"kube-proxy-5wj76\" (UID: \"d9549144-1825-4910-8356-eefc67f31af4\") " pod="kube-system/kube-proxy-5wj76"
Jul 11 00:33:31.239413 kubelet[2072]: I0711 00:33:31.239019 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd9qf\" (UniqueName: \"kubernetes.io/projected/d9549144-1825-4910-8356-eefc67f31af4-kube-api-access-bd9qf\") pod \"kube-proxy-5wj76\" (UID: \"d9549144-1825-4910-8356-eefc67f31af4\") " pod="kube-system/kube-proxy-5wj76"
Jul 11 00:33:31.239413 kubelet[2072]: I0711 00:33:31.239036 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-lib-modules\") pod \"cilium-rpcgb\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") " pod="kube-system/cilium-rpcgb"
Jul 11 00:33:31.239519 kubelet[2072]: I0711 00:33:31.239052 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2918b70f-208d-44aa-96fa-7bf11f462149-cilium-config-path\") pod \"cilium-rpcgb\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") " pod="kube-system/cilium-rpcgb"
Jul 11 00:33:31.239519 kubelet[2072]: I0711 00:33:31.239065 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2918b70f-208d-44aa-96fa-7bf11f462149-hubble-tls\") pod \"cilium-rpcgb\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") " pod="kube-system/cilium-rpcgb"
Jul 11 00:33:31.340153 kubelet[2072]: I0711 00:33:31.340109 2072 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jul 11 00:33:31.450478 kubelet[2072]: E0711 00:33:31.450447 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:31.451397 env[1319]: time="2025-07-11T00:33:31.451290529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5wj76,Uid:d9549144-1825-4910-8356-eefc67f31af4,Namespace:kube-system,Attempt:0,}"
Jul 11 00:33:31.466284 env[1319]: time="2025-07-11T00:33:31.466212687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:33:31.466284 env[1319]: time="2025-07-11T00:33:31.466258250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:33:31.466284 env[1319]: time="2025-07-11T00:33:31.466268891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:33:31.466450 env[1319]: time="2025-07-11T00:33:31.466388379Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/707b717336843e2c09e3e92a6e4e6e25a6a7c868472acb73f38cb391cbfb7db2 pid=2170 runtime=io.containerd.runc.v2
Jul 11 00:33:31.482389 kubelet[2072]: E0711 00:33:31.482361 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:31.484145 env[1319]: time="2025-07-11T00:33:31.483824632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rpcgb,Uid:2918b70f-208d-44aa-96fa-7bf11f462149,Namespace:kube-system,Attempt:0,}"
Jul 11 00:33:31.489916 kubelet[2072]: E0711 00:33:31.489835 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:31.492364 env[1319]: time="2025-07-11T00:33:31.492328503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6t6p6,Uid:7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a,Namespace:kube-system,Attempt:0,}"
Jul 11 00:33:31.510246 env[1319]: time="2025-07-11T00:33:31.510168304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:33:31.510246 env[1319]: time="2025-07-11T00:33:31.510209227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:33:31.510246 env[1319]: time="2025-07-11T00:33:31.510219187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:33:31.510456 env[1319]: time="2025-07-11T00:33:31.510414641Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab pid=2204 runtime=io.containerd.runc.v2
Jul 11 00:33:31.513701 env[1319]: time="2025-07-11T00:33:31.512439902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:33:31.513701 env[1319]: time="2025-07-11T00:33:31.512475584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:33:31.513701 env[1319]: time="2025-07-11T00:33:31.512485345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:33:31.513701 env[1319]: time="2025-07-11T00:33:31.512669718Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a65e435b4aa7bfc6743e7acac09b396225e0811e4362030142fe6d06f88686c2 pid=2219 runtime=io.containerd.runc.v2
Jul 11 00:33:31.519651 env[1319]: time="2025-07-11T00:33:31.519591359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5wj76,Uid:d9549144-1825-4910-8356-eefc67f31af4,Namespace:kube-system,Attempt:0,} returns sandbox id \"707b717336843e2c09e3e92a6e4e6e25a6a7c868472acb73f38cb391cbfb7db2\""
Jul 11 00:33:31.520771 kubelet[2072]: E0711 00:33:31.520742 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:31.528118 env[1319]: time="2025-07-11T00:33:31.527991943Z" level=info msg="CreateContainer within sandbox \"707b717336843e2c09e3e92a6e4e6e25a6a7c868472acb73f38cb391cbfb7db2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 11 00:33:31.555723 env[1319]: time="2025-07-11T00:33:31.555669108Z" level=info msg="CreateContainer within sandbox \"707b717336843e2c09e3e92a6e4e6e25a6a7c868472acb73f38cb391cbfb7db2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"15c2380ef5543a42a214bad552ae5362af634ab95bffbdc09466ae9c40f5cd6a\""
Jul 11 00:33:31.558494 env[1319]: time="2025-07-11T00:33:31.558457382Z" level=info msg="StartContainer for \"15c2380ef5543a42a214bad552ae5362af634ab95bffbdc09466ae9c40f5cd6a\""
Jul 11 00:33:31.571189 env[1319]: time="2025-07-11T00:33:31.571149665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rpcgb,Uid:2918b70f-208d-44aa-96fa-7bf11f462149,Namespace:kube-system,Attempt:0,} returns sandbox id \"10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab\""
Jul 11 00:33:31.572320 kubelet[2072]: E0711 00:33:31.572289 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:31.573493 env[1319]: time="2025-07-11T00:33:31.573458626Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 11 00:33:31.586855 env[1319]: time="2025-07-11T00:33:31.586395165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6t6p6,Uid:7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a65e435b4aa7bfc6743e7acac09b396225e0811e4362030142fe6d06f88686c2\""
Jul 11 00:33:31.587839 kubelet[2072]: E0711 00:33:31.587807 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:31.629668 env[1319]: time="2025-07-11T00:33:31.627645874Z" level=info msg="StartContainer for \"15c2380ef5543a42a214bad552ae5362af634ab95bffbdc09466ae9c40f5cd6a\" returns successfully"
Jul 11 00:33:32.478994 kubelet[2072]: E0711 00:33:32.478301 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:32.488478 kubelet[2072]: I0711 00:33:32.488415 2072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5wj76" podStartSLOduration=1.488397268 podStartE2EDuration="1.488397268s" podCreationTimestamp="2025-07-11 00:33:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:33:32.487192988 +0000 UTC m=+7.142509824" watchObservedRunningTime="2025-07-11 00:33:32.488397268 +0000 UTC m=+7.143714104"
Jul 11 00:33:36.828801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount216271410.mount: Deactivated successfully.
Jul 11 00:33:36.961148 update_engine[1302]: I0711 00:33:36.960767 1302 update_attempter.cc:509] Updating boot flags...
Jul 11 00:33:37.079064 kubelet[2072]: E0711 00:33:37.078666 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:37.240242 kubelet[2072]: E0711 00:33:37.240143 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:37.979779 kubelet[2072]: E0711 00:33:37.979744 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:39.202255 env[1319]: time="2025-07-11T00:33:39.202211828Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:39.203749 env[1319]: time="2025-07-11T00:33:39.203718017Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:39.206029 env[1319]: time="2025-07-11T00:33:39.206000562Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:39.206706 env[1319]: time="2025-07-11T00:33:39.206667672Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 11 00:33:39.209062 env[1319]: time="2025-07-11T00:33:39.209028260Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 11 00:33:39.217369 env[1319]: time="2025-07-11T00:33:39.217337561Z" level=info msg="CreateContainer within sandbox \"10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 11 00:33:39.229449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount722972813.mount: Deactivated successfully.
Jul 11 00:33:39.234617 env[1319]: time="2025-07-11T00:33:39.234569352Z" level=info msg="CreateContainer within sandbox \"10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7\""
Jul 11 00:33:39.235098 env[1319]: time="2025-07-11T00:33:39.235069215Z" level=info msg="StartContainer for \"704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7\""
Jul 11 00:33:39.348011 env[1319]: time="2025-07-11T00:33:39.347972793Z" level=info msg="StartContainer for \"704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7\" returns successfully"
Jul 11 00:33:39.363181 env[1319]: time="2025-07-11T00:33:39.363133208Z" level=info msg="shim disconnected" id=704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7
Jul 11 00:33:39.363462 env[1319]: time="2025-07-11T00:33:39.363434262Z" level=warning msg="cleaning up after shim disconnected" id=704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7 namespace=k8s.io
Jul 11 00:33:39.363534 env[1319]: time="2025-07-11T00:33:39.363520986Z" level=info msg="cleaning up dead shim"
Jul 11 00:33:39.370187 env[1319]: time="2025-07-11T00:33:39.370154970Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:33:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2514 runtime=io.containerd.runc.v2\n"
Jul 11 00:33:39.496601 kubelet[2072]: E0711 00:33:39.496057 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:39.501280 env[1319]: time="2025-07-11T00:33:39.501243582Z" level=info msg="CreateContainer within sandbox \"10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 11 00:33:39.529882 env[1319]: time="2025-07-11T00:33:39.529836534Z" level=info msg="CreateContainer within sandbox \"10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1\""
Jul 11 00:33:39.531558 env[1319]: time="2025-07-11T00:33:39.531523691Z" level=info msg="StartContainer for \"c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1\""
Jul 11 00:33:39.585339 env[1319]: time="2025-07-11T00:33:39.585289197Z" level=info msg="StartContainer for \"c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1\" returns successfully"
Jul 11 00:33:39.596973 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 00:33:39.597304 systemd[1]: Stopped systemd-sysctl.service.
Jul 11 00:33:39.597475 systemd[1]: Stopping systemd-sysctl.service...
Jul 11 00:33:39.599058 systemd[1]: Starting systemd-sysctl.service...
Jul 11 00:33:39.606738 systemd[1]: Finished systemd-sysctl.service.
Jul 11 00:33:39.621525 env[1319]: time="2025-07-11T00:33:39.621482297Z" level=info msg="shim disconnected" id=c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1
Jul 11 00:33:39.621525 env[1319]: time="2025-07-11T00:33:39.621524539Z" level=warning msg="cleaning up after shim disconnected" id=c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1 namespace=k8s.io
Jul 11 00:33:39.621717 env[1319]: time="2025-07-11T00:33:39.621533899Z" level=info msg="cleaning up dead shim"
Jul 11 00:33:39.628385 env[1319]: time="2025-07-11T00:33:39.628340931Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:33:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2579 runtime=io.containerd.runc.v2\n"
Jul 11 00:33:40.227823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7-rootfs.mount: Deactivated successfully.
Jul 11 00:33:40.292418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1515938433.mount: Deactivated successfully.
Jul 11 00:33:40.500129 kubelet[2072]: E0711 00:33:40.499816 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:40.501973 env[1319]: time="2025-07-11T00:33:40.501921500Z" level=info msg="CreateContainer within sandbox \"10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 11 00:33:40.514188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount587204230.mount: Deactivated successfully.
Jul 11 00:33:40.521737 env[1319]: time="2025-07-11T00:33:40.521683884Z" level=info msg="CreateContainer within sandbox \"10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00\""
Jul 11 00:33:40.522140 env[1319]: time="2025-07-11T00:33:40.522117102Z" level=info msg="StartContainer for \"855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00\""
Jul 11 00:33:40.625865 env[1319]: time="2025-07-11T00:33:40.625773189Z" level=info msg="StartContainer for \"855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00\" returns successfully"
Jul 11 00:33:40.662409 env[1319]: time="2025-07-11T00:33:40.662355387Z" level=info msg="shim disconnected" id=855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00
Jul 11 00:33:40.662409 env[1319]: time="2025-07-11T00:33:40.662406029Z" level=warning msg="cleaning up after shim disconnected" id=855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00 namespace=k8s.io
Jul 11 00:33:40.662409 env[1319]: time="2025-07-11T00:33:40.662416430Z" level=info msg="cleaning up dead shim"
Jul 11 00:33:40.669436 env[1319]: time="2025-07-11T00:33:40.669395054Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:33:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2637 runtime=io.containerd.runc.v2\n"
Jul 11 00:33:40.785350 env[1319]: time="2025-07-11T00:33:40.784894299Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:40.786261 env[1319]: time="2025-07-11T00:33:40.786233077Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:40.787644 env[1319]: time="2025-07-11T00:33:40.787596097Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:33:40.788135 env[1319]: time="2025-07-11T00:33:40.788105439Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 11 00:33:40.790487 env[1319]: time="2025-07-11T00:33:40.790459222Z" level=info msg="CreateContainer within sandbox \"a65e435b4aa7bfc6743e7acac09b396225e0811e4362030142fe6d06f88686c2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 11 00:33:40.797790 env[1319]: time="2025-07-11T00:33:40.797744500Z" level=info msg="CreateContainer within sandbox \"a65e435b4aa7bfc6743e7acac09b396225e0811e4362030142fe6d06f88686c2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a\""
Jul 11 00:33:40.798380 env[1319]: time="2025-07-11T00:33:40.798348726Z" level=info msg="StartContainer for \"0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a\""
Jul 11 00:33:40.859112 env[1319]: time="2025-07-11T00:33:40.857740880Z" level=info msg="StartContainer for \"0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a\" returns successfully"
Jul 11 00:33:41.502347 kubelet[2072]: E0711 00:33:41.502310 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:41.505308 kubelet[2072]: E0711 00:33:41.505274 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:41.506928 env[1319]: time="2025-07-11T00:33:41.506882752Z" level=info msg="CreateContainer within sandbox \"10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 11 00:33:41.524106 kubelet[2072]: I0711 00:33:41.521949 2072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-6t6p6" podStartSLOduration=1.3232973829999999 podStartE2EDuration="10.521932818s" podCreationTimestamp="2025-07-11 00:33:31 +0000 UTC" firstStartedPulling="2025-07-11 00:33:31.590395324 +0000 UTC m=+6.245712160" lastFinishedPulling="2025-07-11 00:33:40.789030759 +0000 UTC m=+15.444347595" observedRunningTime="2025-07-11 00:33:41.5200545 +0000 UTC m=+16.175371336" watchObservedRunningTime="2025-07-11 00:33:41.521932818 +0000 UTC m=+16.177249654"
Jul 11 00:33:41.523044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1988655035.mount: Deactivated successfully.
Jul 11 00:33:41.530137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199696754.mount: Deactivated successfully.
Jul 11 00:33:41.536744 env[1319]: time="2025-07-11T00:33:41.536686272Z" level=info msg="CreateContainer within sandbox \"10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca\""
Jul 11 00:33:41.537514 env[1319]: time="2025-07-11T00:33:41.537463705Z" level=info msg="StartContainer for \"c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca\""
Jul 11 00:33:41.638501 env[1319]: time="2025-07-11T00:33:41.638460308Z" level=info msg="StartContainer for \"c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca\" returns successfully"
Jul 11 00:33:41.653041 env[1319]: time="2025-07-11T00:33:41.652995313Z" level=info msg="shim disconnected" id=c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca
Jul 11 00:33:41.653223 env[1319]: time="2025-07-11T00:33:41.653040555Z" level=warning msg="cleaning up after shim disconnected" id=c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca namespace=k8s.io
Jul 11 00:33:41.653223 env[1319]: time="2025-07-11T00:33:41.653060796Z" level=info msg="cleaning up dead shim"
Jul 11 00:33:41.660692 env[1319]: time="2025-07-11T00:33:41.660642791Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:33:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2731 runtime=io.containerd.runc.v2\n"
Jul 11 00:33:42.514257 kubelet[2072]: E0711 00:33:42.509160 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:42.514257 kubelet[2072]: E0711 00:33:42.512678 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:42.528820 env[1319]: time="2025-07-11T00:33:42.515237414Z" level=info msg="CreateContainer within sandbox \"10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 11 00:33:42.546899 env[1319]: time="2025-07-11T00:33:42.546781866Z" level=info msg="CreateContainer within sandbox \"10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff\""
Jul 11 00:33:42.550947 env[1319]: time="2025-07-11T00:33:42.550887029Z" level=info msg="StartContainer for \"a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff\""
Jul 11 00:33:42.632371 env[1319]: time="2025-07-11T00:33:42.632305021Z" level=info msg="StartContainer for \"a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff\" returns successfully"
Jul 11 00:33:42.792481 kubelet[2072]: I0711 00:33:42.792378 2072 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 11 00:33:42.816494 kubelet[2072]: W0711 00:33:42.816449 2072 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Jul 11 00:33:42.817548 kubelet[2072]: E0711 00:33:42.817522 2072 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Jul 11 00:33:42.912660 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Jul 11 00:33:42.921172 kubelet[2072]: I0711 00:33:42.921127 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8ee0a98-8921-45c7-83f2-d805222561c3-config-volume\") pod \"coredns-7c65d6cfc9-qczl9\" (UID: \"e8ee0a98-8921-45c7-83f2-d805222561c3\") " pod="kube-system/coredns-7c65d6cfc9-qczl9"
Jul 11 00:33:42.921172 kubelet[2072]: I0711 00:33:42.921173 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stdd2\" (UniqueName: \"kubernetes.io/projected/e8ee0a98-8921-45c7-83f2-d805222561c3-kube-api-access-stdd2\") pod \"coredns-7c65d6cfc9-qczl9\" (UID: \"e8ee0a98-8921-45c7-83f2-d805222561c3\") " pod="kube-system/coredns-7c65d6cfc9-qczl9"
Jul 11 00:33:42.921308 kubelet[2072]: I0711 00:33:42.921197 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w842p\" (UniqueName: \"kubernetes.io/projected/9d057eae-ff5b-422c-ab97-a01e8b5fce35-kube-api-access-w842p\") pod \"coredns-7c65d6cfc9-cg9c8\" (UID: \"9d057eae-ff5b-422c-ab97-a01e8b5fce35\") " pod="kube-system/coredns-7c65d6cfc9-cg9c8"
Jul 11 00:33:42.921308 kubelet[2072]: I0711 00:33:42.921224 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d057eae-ff5b-422c-ab97-a01e8b5fce35-config-volume\") pod \"coredns-7c65d6cfc9-cg9c8\" (UID: \"9d057eae-ff5b-422c-ab97-a01e8b5fce35\") " pod="kube-system/coredns-7c65d6cfc9-cg9c8"
Jul 11 00:33:43.174672 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Jul 11 00:33:43.514680 kubelet[2072]: E0711 00:33:43.513460 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:43.528738 kubelet[2072]: I0711 00:33:43.528684 2072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rpcgb" podStartSLOduration=4.892897555 podStartE2EDuration="12.528666369s" podCreationTimestamp="2025-07-11 00:33:31 +0000 UTC" firstStartedPulling="2025-07-11 00:33:31.573026396 +0000 UTC m=+6.228343232" lastFinishedPulling="2025-07-11 00:33:39.20879521 +0000 UTC m=+13.864112046" observedRunningTime="2025-07-11 00:33:43.5284248 +0000 UTC m=+18.183741636" watchObservedRunningTime="2025-07-11 00:33:43.528666369 +0000 UTC m=+18.183983205"
Jul 11 00:33:44.016131 kubelet[2072]: E0711 00:33:44.016101 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:44.020775 kubelet[2072]: E0711 00:33:44.019501 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:44.020967 env[1319]: time="2025-07-11T00:33:44.020920628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cg9c8,Uid:9d057eae-ff5b-422c-ab97-a01e8b5fce35,Namespace:kube-system,Attempt:0,}"
Jul 11 00:33:44.022083 env[1319]: time="2025-07-11T00:33:44.022050309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qczl9,Uid:e8ee0a98-8921-45c7-83f2-d805222561c3,Namespace:kube-system,Attempt:0,}"
Jul 11 00:33:44.515308 kubelet[2072]: E0711 00:33:44.515276 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:44.793493 systemd-networkd[1094]: cilium_host: Link UP
Jul 11 00:33:44.795762 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Jul 11 00:33:44.795797 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 11 00:33:44.793596 systemd-networkd[1094]: cilium_net: Link UP
Jul 11 00:33:44.795226 systemd-networkd[1094]: cilium_net: Gained carrier
Jul 11 00:33:44.795408 systemd-networkd[1094]: cilium_host: Gained carrier
Jul 11 00:33:44.795502 systemd-networkd[1094]: cilium_net: Gained IPv6LL
Jul 11 00:33:44.796309 systemd-networkd[1094]: cilium_host: Gained IPv6LL
Jul 11 00:33:44.870225 systemd-networkd[1094]: cilium_vxlan: Link UP
Jul 11 00:33:44.870237 systemd-networkd[1094]: cilium_vxlan: Gained carrier
Jul 11 00:33:45.156676 kernel: NET: Registered PF_ALG protocol family
Jul 11 00:33:45.517254 kubelet[2072]: E0711 00:33:45.517143 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:33:45.752427 systemd-networkd[1094]: lxc_health: Link UP
Jul 11 00:33:45.767697 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 11 00:33:45.770648 systemd-networkd[1094]: lxc_health: Gained carrier
Jul 11 00:33:46.106686 systemd-networkd[1094]: lxcb4a69c9536bf: Link UP
Jul 11 00:33:46.119604 systemd-networkd[1094]: lxca9c00352364e: Link UP
Jul 11 00:33:46.128680 kernel: eth0: renamed from tmpe20cb
Jul 11 00:33:46.135679 kernel: eth0: renamed from tmp11e11
Jul 11 00:33:46.142334 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 11 00:33:46.142417 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca9c00352364e: link becomes ready
Jul 11 00:33:46.143004 systemd-networkd[1094]: lxca9c00352364e: Gained carrier
Jul 11 00:33:46.145691 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb4a69c9536bf: link becomes ready
Jul 11 00:33:46.145865 systemd-networkd[1094]:
lxcb4a69c9536bf: Gained carrier Jul 11 00:33:46.342866 systemd-networkd[1094]: cilium_vxlan: Gained IPv6LL Jul 11 00:33:47.046786 systemd-networkd[1094]: lxc_health: Gained IPv6LL Jul 11 00:33:47.488898 kubelet[2072]: E0711 00:33:47.488863 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:47.814798 systemd-networkd[1094]: lxcb4a69c9536bf: Gained IPv6LL Jul 11 00:33:48.134817 systemd-networkd[1094]: lxca9c00352364e: Gained IPv6LL Jul 11 00:33:49.531892 env[1319]: time="2025-07-11T00:33:49.531724604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:33:49.531892 env[1319]: time="2025-07-11T00:33:49.531771245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:33:49.531892 env[1319]: time="2025-07-11T00:33:49.531782125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:33:49.533696 env[1319]: time="2025-07-11T00:33:49.532700512Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e20cb4148cd958a87a9e249bc3fa3886bad165ebd7b80a28d2c96104b6b5df24 pid=3292 runtime=io.containerd.runc.v2 Jul 11 00:33:49.573747 env[1319]: time="2025-07-11T00:33:49.573671908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:33:49.573747 env[1319]: time="2025-07-11T00:33:49.573736430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:33:49.573978 env[1319]: time="2025-07-11T00:33:49.573752311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:33:49.574091 env[1319]: time="2025-07-11T00:33:49.574052519Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11e11bb5027aad7cb8114c62cd37baf05cf5ced7042b718e6c2c27ec51ce4f88 pid=3321 runtime=io.containerd.runc.v2 Jul 11 00:33:49.591331 systemd-resolved[1237]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:33:49.615466 env[1319]: time="2025-07-11T00:33:49.615429407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cg9c8,Uid:9d057eae-ff5b-422c-ab97-a01e8b5fce35,Namespace:kube-system,Attempt:0,} returns sandbox id \"e20cb4148cd958a87a9e249bc3fa3886bad165ebd7b80a28d2c96104b6b5df24\"" Jul 11 00:33:49.616804 kubelet[2072]: E0711 00:33:49.616216 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:49.617944 systemd-resolved[1237]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:33:49.624661 env[1319]: time="2025-07-11T00:33:49.620977249Z" level=info msg="CreateContainer within sandbox \"e20cb4148cd958a87a9e249bc3fa3886bad165ebd7b80a28d2c96104b6b5df24\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:33:49.636408 env[1319]: time="2025-07-11T00:33:49.636367019Z" level=info msg="CreateContainer within sandbox \"e20cb4148cd958a87a9e249bc3fa3886bad165ebd7b80a28d2c96104b6b5df24\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a6eeb1d1c994de7878d8f50d03bb7e589726b9f68a5213496997e4e006d06a5\"" Jul 11 00:33:49.637130 env[1319]: 
time="2025-07-11T00:33:49.637089960Z" level=info msg="StartContainer for \"2a6eeb1d1c994de7878d8f50d03bb7e589726b9f68a5213496997e4e006d06a5\"" Jul 11 00:33:49.638713 env[1319]: time="2025-07-11T00:33:49.638683286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qczl9,Uid:e8ee0a98-8921-45c7-83f2-d805222561c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"11e11bb5027aad7cb8114c62cd37baf05cf5ced7042b718e6c2c27ec51ce4f88\"" Jul 11 00:33:49.639321 kubelet[2072]: E0711 00:33:49.639291 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:49.641341 env[1319]: time="2025-07-11T00:33:49.641272962Z" level=info msg="CreateContainer within sandbox \"11e11bb5027aad7cb8114c62cd37baf05cf5ced7042b718e6c2c27ec51ce4f88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:33:49.651312 env[1319]: time="2025-07-11T00:33:49.651271254Z" level=info msg="CreateContainer within sandbox \"11e11bb5027aad7cb8114c62cd37baf05cf5ced7042b718e6c2c27ec51ce4f88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ee3b34c080e0d0df3b9c10bd31ed05136be3e5ef22937f59615292c84516cab\"" Jul 11 00:33:49.651970 env[1319]: time="2025-07-11T00:33:49.651942433Z" level=info msg="StartContainer for \"8ee3b34c080e0d0df3b9c10bd31ed05136be3e5ef22937f59615292c84516cab\"" Jul 11 00:33:49.697321 env[1319]: time="2025-07-11T00:33:49.697275037Z" level=info msg="StartContainer for \"2a6eeb1d1c994de7878d8f50d03bb7e589726b9f68a5213496997e4e006d06a5\" returns successfully" Jul 11 00:33:49.708123 env[1319]: time="2025-07-11T00:33:49.708071112Z" level=info msg="StartContainer for \"8ee3b34c080e0d0df3b9c10bd31ed05136be3e5ef22937f59615292c84516cab\" returns successfully" Jul 11 00:33:50.528876 kubelet[2072]: E0711 00:33:50.528609 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:50.530692 kubelet[2072]: E0711 00:33:50.530670 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:50.546075 kubelet[2072]: I0711 00:33:50.546012 2072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qczl9" podStartSLOduration=19.546000034 podStartE2EDuration="19.546000034s" podCreationTimestamp="2025-07-11 00:33:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:33:50.545330855 +0000 UTC m=+25.200647651" watchObservedRunningTime="2025-07-11 00:33:50.546000034 +0000 UTC m=+25.201316870" Jul 11 00:33:50.566662 kubelet[2072]: I0711 00:33:50.566550 2072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-cg9c8" podStartSLOduration=19.56653145 podStartE2EDuration="19.56653145s" podCreationTimestamp="2025-07-11 00:33:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:33:50.556975262 +0000 UTC m=+25.212292098" watchObservedRunningTime="2025-07-11 00:33:50.56653145 +0000 UTC m=+25.221848246" Jul 11 00:33:51.532832 kubelet[2072]: E0711 00:33:51.532804 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:51.533224 kubelet[2072]: E0711 00:33:51.532853 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:52.534878 kubelet[2072]: E0711 00:33:52.534825 2072 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:52.535409 kubelet[2072]: E0711 00:33:52.535373 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:52.837098 systemd[1]: Started sshd@5-10.0.0.84:22-10.0.0.1:44382.service. Jul 11 00:33:52.871789 sshd[3450]: Accepted publickey for core from 10.0.0.1 port 44382 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:33:52.873043 sshd[3450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:33:52.876834 systemd-logind[1301]: New session 6 of user core. Jul 11 00:33:52.877684 systemd[1]: Started session-6.scope. Jul 11 00:33:52.994032 sshd[3450]: pam_unix(sshd:session): session closed for user core Jul 11 00:33:52.996207 systemd[1]: sshd@5-10.0.0.84:22-10.0.0.1:44382.service: Deactivated successfully. Jul 11 00:33:52.997226 systemd-logind[1301]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:33:52.997288 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:33:52.998453 systemd-logind[1301]: Removed session 6. Jul 11 00:33:57.997179 systemd[1]: Started sshd@6-10.0.0.84:22-10.0.0.1:44398.service. Jul 11 00:33:58.031817 sshd[3466]: Accepted publickey for core from 10.0.0.1 port 44398 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:33:58.033235 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:33:58.038319 systemd[1]: Started session-7.scope. Jul 11 00:33:58.038510 systemd-logind[1301]: New session 7 of user core. Jul 11 00:33:58.161033 sshd[3466]: pam_unix(sshd:session): session closed for user core Jul 11 00:33:58.163549 systemd[1]: sshd@6-10.0.0.84:22-10.0.0.1:44398.service: Deactivated successfully. 
Jul 11 00:33:58.164739 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:33:58.165071 systemd-logind[1301]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:33:58.165844 systemd-logind[1301]: Removed session 7. Jul 11 00:33:58.409904 kubelet[2072]: I0711 00:33:58.409858 2072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:33:58.410353 kubelet[2072]: E0711 00:33:58.410321 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:33:58.551839 kubelet[2072]: E0711 00:33:58.551797 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:03.164440 systemd[1]: Started sshd@7-10.0.0.84:22-10.0.0.1:32776.service. Jul 11 00:34:03.198707 sshd[3485]: Accepted publickey for core from 10.0.0.1 port 32776 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:03.200321 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:03.205594 systemd[1]: Started session-8.scope. Jul 11 00:34:03.205940 systemd-logind[1301]: New session 8 of user core. Jul 11 00:34:03.328969 sshd[3485]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:03.332554 systemd[1]: sshd@7-10.0.0.84:22-10.0.0.1:32776.service: Deactivated successfully. Jul 11 00:34:03.333412 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:34:03.335678 systemd-logind[1301]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:34:03.336945 systemd-logind[1301]: Removed session 8. Jul 11 00:34:08.331710 systemd[1]: Started sshd@8-10.0.0.84:22-10.0.0.1:32778.service. 
Jul 11 00:34:08.370900 sshd[3500]: Accepted publickey for core from 10.0.0.1 port 32778 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:08.372627 sshd[3500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:08.376429 systemd-logind[1301]: New session 9 of user core. Jul 11 00:34:08.376904 systemd[1]: Started session-9.scope. Jul 11 00:34:08.492480 sshd[3500]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:08.494057 systemd[1]: Started sshd@9-10.0.0.84:22-10.0.0.1:32794.service. Jul 11 00:34:08.495134 systemd[1]: sshd@8-10.0.0.84:22-10.0.0.1:32778.service: Deactivated successfully. Jul 11 00:34:08.496327 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:34:08.496349 systemd-logind[1301]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:34:08.497877 systemd-logind[1301]: Removed session 9. Jul 11 00:34:08.530460 sshd[3514]: Accepted publickey for core from 10.0.0.1 port 32794 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:08.530897 sshd[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:08.535029 systemd[1]: Started session-10.scope. Jul 11 00:34:08.535232 systemd-logind[1301]: New session 10 of user core. Jul 11 00:34:08.691740 sshd[3514]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:08.695660 systemd[1]: Started sshd@10-10.0.0.84:22-10.0.0.1:32796.service. Jul 11 00:34:08.701165 systemd[1]: sshd@9-10.0.0.84:22-10.0.0.1:32794.service: Deactivated successfully. Jul 11 00:34:08.702342 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:34:08.704816 systemd-logind[1301]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:34:08.709867 systemd-logind[1301]: Removed session 10. 
Jul 11 00:34:08.742036 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 32796 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:08.743429 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:08.746999 systemd-logind[1301]: New session 11 of user core. Jul 11 00:34:08.747866 systemd[1]: Started session-11.scope. Jul 11 00:34:08.866196 sshd[3528]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:08.868562 systemd[1]: sshd@10-10.0.0.84:22-10.0.0.1:32796.service: Deactivated successfully. Jul 11 00:34:08.869670 systemd-logind[1301]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:34:08.869732 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:34:08.870504 systemd-logind[1301]: Removed session 11. Jul 11 00:34:13.868961 systemd[1]: Started sshd@11-10.0.0.84:22-10.0.0.1:50050.service. Jul 11 00:34:13.904477 sshd[3546]: Accepted publickey for core from 10.0.0.1 port 50050 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:13.905760 sshd[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:13.909150 systemd-logind[1301]: New session 12 of user core. Jul 11 00:34:13.910002 systemd[1]: Started session-12.scope. Jul 11 00:34:14.018092 sshd[3546]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:14.020626 systemd[1]: sshd@11-10.0.0.84:22-10.0.0.1:50050.service: Deactivated successfully. Jul 11 00:34:14.022001 systemd-logind[1301]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:34:14.022057 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:34:14.022878 systemd-logind[1301]: Removed session 12. Jul 11 00:34:19.023437 systemd[1]: Started sshd@12-10.0.0.84:22-10.0.0.1:50052.service. 
Jul 11 00:34:19.056753 sshd[3560]: Accepted publickey for core from 10.0.0.1 port 50052 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:19.057935 sshd[3560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:19.061823 systemd-logind[1301]: New session 13 of user core. Jul 11 00:34:19.062601 systemd[1]: Started session-13.scope. Jul 11 00:34:19.170024 sshd[3560]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:19.172395 systemd[1]: Started sshd@13-10.0.0.84:22-10.0.0.1:50068.service. Jul 11 00:34:19.174916 systemd-logind[1301]: Session 13 logged out. Waiting for processes to exit. Jul 11 00:34:19.175090 systemd[1]: sshd@12-10.0.0.84:22-10.0.0.1:50052.service: Deactivated successfully. Jul 11 00:34:19.175871 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:34:19.176259 systemd-logind[1301]: Removed session 13. Jul 11 00:34:19.204943 sshd[3572]: Accepted publickey for core from 10.0.0.1 port 50068 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:19.206581 sshd[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:19.209836 systemd-logind[1301]: New session 14 of user core. Jul 11 00:34:19.210698 systemd[1]: Started session-14.scope. Jul 11 00:34:19.426993 sshd[3572]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:19.429029 systemd[1]: Started sshd@14-10.0.0.84:22-10.0.0.1:50082.service. Jul 11 00:34:19.430339 systemd-logind[1301]: Session 14 logged out. Waiting for processes to exit. Jul 11 00:34:19.430511 systemd[1]: sshd@13-10.0.0.84:22-10.0.0.1:50068.service: Deactivated successfully. Jul 11 00:34:19.431239 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:34:19.431713 systemd-logind[1301]: Removed session 14. 
Jul 11 00:34:19.465072 sshd[3584]: Accepted publickey for core from 10.0.0.1 port 50082 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:19.466538 sshd[3584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:19.470278 systemd-logind[1301]: New session 15 of user core. Jul 11 00:34:19.471167 systemd[1]: Started session-15.scope. Jul 11 00:34:20.628066 sshd[3584]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:20.629866 systemd[1]: Started sshd@15-10.0.0.84:22-10.0.0.1:50094.service. Jul 11 00:34:20.632728 systemd[1]: sshd@14-10.0.0.84:22-10.0.0.1:50082.service: Deactivated successfully. Jul 11 00:34:20.633748 systemd-logind[1301]: Session 15 logged out. Waiting for processes to exit. Jul 11 00:34:20.633861 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:34:20.634804 systemd-logind[1301]: Removed session 15. Jul 11 00:34:20.668981 sshd[3602]: Accepted publickey for core from 10.0.0.1 port 50094 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:20.670701 sshd[3602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:20.674780 systemd-logind[1301]: New session 16 of user core. Jul 11 00:34:20.674880 systemd[1]: Started session-16.scope. Jul 11 00:34:20.897087 sshd[3602]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:20.898860 systemd[1]: Started sshd@16-10.0.0.84:22-10.0.0.1:50096.service. Jul 11 00:34:20.900854 systemd-logind[1301]: Session 16 logged out. Waiting for processes to exit. Jul 11 00:34:20.901007 systemd[1]: sshd@15-10.0.0.84:22-10.0.0.1:50094.service: Deactivated successfully. Jul 11 00:34:20.901736 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 00:34:20.902223 systemd-logind[1301]: Removed session 16. 
Jul 11 00:34:20.933785 sshd[3617]: Accepted publickey for core from 10.0.0.1 port 50096 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:20.934868 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:20.938073 systemd-logind[1301]: New session 17 of user core. Jul 11 00:34:20.938819 systemd[1]: Started session-17.scope. Jul 11 00:34:21.047526 sshd[3617]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:21.049858 systemd[1]: sshd@16-10.0.0.84:22-10.0.0.1:50096.service: Deactivated successfully. Jul 11 00:34:21.050804 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:34:21.052853 systemd-logind[1301]: Session 17 logged out. Waiting for processes to exit. Jul 11 00:34:21.053984 systemd-logind[1301]: Removed session 17. Jul 11 00:34:26.050745 systemd[1]: Started sshd@17-10.0.0.84:22-10.0.0.1:52320.service. Jul 11 00:34:26.084604 sshd[3638]: Accepted publickey for core from 10.0.0.1 port 52320 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:26.085804 sshd[3638]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:26.089061 systemd-logind[1301]: New session 18 of user core. Jul 11 00:34:26.089884 systemd[1]: Started session-18.scope. Jul 11 00:34:26.197969 sshd[3638]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:26.200380 systemd[1]: sshd@17-10.0.0.84:22-10.0.0.1:52320.service: Deactivated successfully. Jul 11 00:34:26.201328 systemd-logind[1301]: Session 18 logged out. Waiting for processes to exit. Jul 11 00:34:26.201361 systemd[1]: session-18.scope: Deactivated successfully. Jul 11 00:34:26.202044 systemd-logind[1301]: Removed session 18. Jul 11 00:34:31.201283 systemd[1]: Started sshd@18-10.0.0.84:22-10.0.0.1:52336.service. 
Jul 11 00:34:31.233270 sshd[3652]: Accepted publickey for core from 10.0.0.1 port 52336 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:31.234440 sshd[3652]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:31.238284 systemd-logind[1301]: New session 19 of user core. Jul 11 00:34:31.238462 systemd[1]: Started session-19.scope. Jul 11 00:34:31.343003 sshd[3652]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:31.345329 systemd-logind[1301]: Session 19 logged out. Waiting for processes to exit. Jul 11 00:34:31.345529 systemd[1]: sshd@18-10.0.0.84:22-10.0.0.1:52336.service: Deactivated successfully. Jul 11 00:34:31.346269 systemd[1]: session-19.scope: Deactivated successfully. Jul 11 00:34:31.346640 systemd-logind[1301]: Removed session 19. Jul 11 00:34:36.346188 systemd[1]: Started sshd@19-10.0.0.84:22-10.0.0.1:60576.service. Jul 11 00:34:36.377999 sshd[3668]: Accepted publickey for core from 10.0.0.1 port 60576 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:36.379195 sshd[3668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:36.382644 systemd-logind[1301]: New session 20 of user core. Jul 11 00:34:36.383474 systemd[1]: Started session-20.scope. Jul 11 00:34:36.488360 sshd[3668]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:36.490889 systemd[1]: sshd@19-10.0.0.84:22-10.0.0.1:60576.service: Deactivated successfully. Jul 11 00:34:36.491809 systemd-logind[1301]: Session 20 logged out. Waiting for processes to exit. Jul 11 00:34:36.491849 systemd[1]: session-20.scope: Deactivated successfully. Jul 11 00:34:36.492529 systemd-logind[1301]: Removed session 20. 
Jul 11 00:34:40.451298 kubelet[2072]: E0711 00:34:40.451263 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:41.491295 systemd[1]: Started sshd@20-10.0.0.84:22-10.0.0.1:60588.service. Jul 11 00:34:41.523570 sshd[3682]: Accepted publickey for core from 10.0.0.1 port 60588 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:41.524826 sshd[3682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:41.528995 systemd-logind[1301]: New session 21 of user core. Jul 11 00:34:41.529869 systemd[1]: Started session-21.scope. Jul 11 00:34:41.642308 sshd[3682]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:41.644831 systemd[1]: Started sshd@21-10.0.0.84:22-10.0.0.1:60590.service. Jul 11 00:34:41.646188 systemd-logind[1301]: Session 21 logged out. Waiting for processes to exit. Jul 11 00:34:41.646348 systemd[1]: sshd@20-10.0.0.84:22-10.0.0.1:60588.service: Deactivated successfully. Jul 11 00:34:41.647190 systemd[1]: session-21.scope: Deactivated successfully. Jul 11 00:34:41.647603 systemd-logind[1301]: Removed session 21. Jul 11 00:34:41.677481 sshd[3694]: Accepted publickey for core from 10.0.0.1 port 60590 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:41.679064 sshd[3694]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:41.682466 systemd-logind[1301]: New session 22 of user core. Jul 11 00:34:41.683358 systemd[1]: Started session-22.scope. 
Jul 11 00:34:42.451506 kubelet[2072]: E0711 00:34:42.451135 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:43.454591 env[1319]: time="2025-07-11T00:34:43.454539810Z" level=info msg="StopContainer for \"0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a\" with timeout 30 (s)" Jul 11 00:34:43.455122 env[1319]: time="2025-07-11T00:34:43.455097098Z" level=info msg="Stop container \"0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a\" with signal terminated" Jul 11 00:34:43.493141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a-rootfs.mount: Deactivated successfully. Jul 11 00:34:43.495282 env[1319]: time="2025-07-11T00:34:43.495219885Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:34:43.499336 env[1319]: time="2025-07-11T00:34:43.499286140Z" level=info msg="StopContainer for \"a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff\" with timeout 2 (s)" Jul 11 00:34:43.499596 env[1319]: time="2025-07-11T00:34:43.499567664Z" level=info msg="Stop container \"a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff\" with signal terminated" Jul 11 00:34:43.504517 env[1319]: time="2025-07-11T00:34:43.504465251Z" level=info msg="shim disconnected" id=0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a Jul 11 00:34:43.504744 env[1319]: time="2025-07-11T00:34:43.504524371Z" level=warning msg="cleaning up after shim disconnected" id=0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a namespace=k8s.io Jul 11 00:34:43.504744 env[1319]: 
time="2025-07-11T00:34:43.504537412Z" level=info msg="cleaning up dead shim" Jul 11 00:34:43.505372 systemd-networkd[1094]: lxc_health: Link DOWN Jul 11 00:34:43.505378 systemd-networkd[1094]: lxc_health: Lost carrier Jul 11 00:34:43.512341 env[1319]: time="2025-07-11T00:34:43.512299957Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:34:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3750 runtime=io.containerd.runc.v2\n" Jul 11 00:34:43.514531 env[1319]: time="2025-07-11T00:34:43.514479867Z" level=info msg="StopContainer for \"0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a\" returns successfully" Jul 11 00:34:43.515113 env[1319]: time="2025-07-11T00:34:43.515087435Z" level=info msg="StopPodSandbox for \"a65e435b4aa7bfc6743e7acac09b396225e0811e4362030142fe6d06f88686c2\"" Jul 11 00:34:43.515170 env[1319]: time="2025-07-11T00:34:43.515149996Z" level=info msg="Container to stop \"0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:34:43.517071 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a65e435b4aa7bfc6743e7acac09b396225e0811e4362030142fe6d06f88686c2-shm.mount: Deactivated successfully. Jul 11 00:34:43.556777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a65e435b4aa7bfc6743e7acac09b396225e0811e4362030142fe6d06f88686c2-rootfs.mount: Deactivated successfully. Jul 11 00:34:43.560576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff-rootfs.mount: Deactivated successfully. 
Jul 11 00:34:43.563186 env[1319]: time="2025-07-11T00:34:43.563140090Z" level=info msg="shim disconnected" id=a65e435b4aa7bfc6743e7acac09b396225e0811e4362030142fe6d06f88686c2
Jul 11 00:34:43.563340 env[1319]: time="2025-07-11T00:34:43.563187531Z" level=warning msg="cleaning up after shim disconnected" id=a65e435b4aa7bfc6743e7acac09b396225e0811e4362030142fe6d06f88686c2 namespace=k8s.io
Jul 11 00:34:43.563340 env[1319]: time="2025-07-11T00:34:43.563204451Z" level=info msg="cleaning up dead shim"
Jul 11 00:34:43.564328 env[1319]: time="2025-07-11T00:34:43.564296426Z" level=info msg="shim disconnected" id=a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff
Jul 11 00:34:43.564456 env[1319]: time="2025-07-11T00:34:43.564437988Z" level=warning msg="cleaning up after shim disconnected" id=a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff namespace=k8s.io
Jul 11 00:34:43.564541 env[1319]: time="2025-07-11T00:34:43.564526749Z" level=info msg="cleaning up dead shim"
Jul 11 00:34:43.571063 env[1319]: time="2025-07-11T00:34:43.571015278Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:34:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3799 runtime=io.containerd.runc.v2\n"
Jul 11 00:34:43.571368 env[1319]: time="2025-07-11T00:34:43.571341682Z" level=info msg="TearDown network for sandbox \"a65e435b4aa7bfc6743e7acac09b396225e0811e4362030142fe6d06f88686c2\" successfully"
Jul 11 00:34:43.571414 env[1319]: time="2025-07-11T00:34:43.571368722Z" level=info msg="StopPodSandbox for \"a65e435b4aa7bfc6743e7acac09b396225e0811e4362030142fe6d06f88686c2\" returns successfully"
Jul 11 00:34:43.571791 env[1319]: time="2025-07-11T00:34:43.571761528Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:34:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3803 runtime=io.containerd.runc.v2\n"
Jul 11 00:34:43.573699 env[1319]: time="2025-07-11T00:34:43.573649714Z" level=info msg="StopContainer for \"a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff\" returns successfully"
Jul 11 00:34:43.574165 env[1319]: time="2025-07-11T00:34:43.574141360Z" level=info msg="StopPodSandbox for \"10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab\""
Jul 11 00:34:43.575272 env[1319]: time="2025-07-11T00:34:43.574291482Z" level=info msg="Container to stop \"c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:34:43.575399 env[1319]: time="2025-07-11T00:34:43.575373177Z" level=info msg="Container to stop \"704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:34:43.575463 env[1319]: time="2025-07-11T00:34:43.575446418Z" level=info msg="Container to stop \"c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:34:43.575541 env[1319]: time="2025-07-11T00:34:43.575522539Z" level=info msg="Container to stop \"855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:34:43.575605 env[1319]: time="2025-07-11T00:34:43.575588700Z" level=info msg="Container to stop \"a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:34:43.598880 env[1319]: time="2025-07-11T00:34:43.598816857Z" level=info msg="shim disconnected" id=10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab
Jul 11 00:34:43.598880 env[1319]: time="2025-07-11T00:34:43.598872537Z" level=warning msg="cleaning up after shim disconnected" id=10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab namespace=k8s.io
Jul 11 00:34:43.598880 env[1319]: time="2025-07-11T00:34:43.598882577Z" level=info msg="cleaning up dead shim"
Jul 11 00:34:43.613457 env[1319]: time="2025-07-11T00:34:43.613414415Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:34:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3845 runtime=io.containerd.runc.v2\n"
Jul 11 00:34:43.613800 env[1319]: time="2025-07-11T00:34:43.613771060Z" level=info msg="TearDown network for sandbox \"10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab\" successfully"
Jul 11 00:34:43.613850 env[1319]: time="2025-07-11T00:34:43.613798421Z" level=info msg="StopPodSandbox for \"10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab\" returns successfully"
Jul 11 00:34:43.631661 kubelet[2072]: I0711 00:34:43.629072 2072 scope.go:117] "RemoveContainer" containerID="0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a"
Jul 11 00:34:43.632123 env[1319]: time="2025-07-11T00:34:43.631114497Z" level=info msg="RemoveContainer for \"0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a\""
Jul 11 00:34:43.635926 env[1319]: time="2025-07-11T00:34:43.635845001Z" level=info msg="RemoveContainer for \"0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a\" returns successfully"
Jul 11 00:34:43.636156 kubelet[2072]: I0711 00:34:43.636133 2072 scope.go:117] "RemoveContainer" containerID="0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a"
Jul 11 00:34:43.636795 env[1319]: time="2025-07-11T00:34:43.636356288Z" level=error msg="ContainerStatus for \"0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a\": not found"
Jul 11 00:34:43.637439 kubelet[2072]: E0711 00:34:43.637387 2072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a\": not found" containerID="0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a"
Jul 11 00:34:43.637526 kubelet[2072]: I0711 00:34:43.637424 2072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a"} err="failed to get container status \"0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e02f9dbdce3c65cd2ec3ee3d4568f11d81bfc0f2ed1a0509c03dce434b4855a\": not found"
Jul 11 00:34:43.637607 kubelet[2072]: I0711 00:34:43.637527 2072 scope.go:117] "RemoveContainer" containerID="a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff"
Jul 11 00:34:43.640179 env[1319]: time="2025-07-11T00:34:43.640148460Z" level=info msg="RemoveContainer for \"a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff\""
Jul 11 00:34:43.642652 env[1319]: time="2025-07-11T00:34:43.642607013Z" level=info msg="RemoveContainer for \"a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff\" returns successfully"
Jul 11 00:34:43.642846 kubelet[2072]: I0711 00:34:43.642812 2072 scope.go:117] "RemoveContainer" containerID="c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca"
Jul 11 00:34:43.644079 env[1319]: time="2025-07-11T00:34:43.643979032Z" level=info msg="RemoveContainer for \"c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca\""
Jul 11 00:34:43.646571 env[1319]: time="2025-07-11T00:34:43.646535227Z" level=info msg="RemoveContainer for \"c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca\" returns successfully"
Jul 11 00:34:43.646757 kubelet[2072]: I0711 00:34:43.646720 2072 scope.go:117] "RemoveContainer" containerID="855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00"
Jul 11 00:34:43.647853 env[1319]: time="2025-07-11T00:34:43.647821684Z" level=info msg="RemoveContainer for \"855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00\""
Jul 11 00:34:43.650138 env[1319]: time="2025-07-11T00:34:43.650095595Z" level=info msg="RemoveContainer for \"855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00\" returns successfully"
Jul 11 00:34:43.650392 kubelet[2072]: I0711 00:34:43.650304 2072 scope.go:117] "RemoveContainer" containerID="c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1"
Jul 11 00:34:43.651433 env[1319]: time="2025-07-11T00:34:43.651406733Z" level=info msg="RemoveContainer for \"c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1\""
Jul 11 00:34:43.653740 env[1319]: time="2025-07-11T00:34:43.653704885Z" level=info msg="RemoveContainer for \"c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1\" returns successfully"
Jul 11 00:34:43.653926 kubelet[2072]: I0711 00:34:43.653892 2072 scope.go:117] "RemoveContainer" containerID="704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7"
Jul 11 00:34:43.654930 env[1319]: time="2025-07-11T00:34:43.654904701Z" level=info msg="RemoveContainer for \"704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7\""
Jul 11 00:34:43.657041 env[1319]: time="2025-07-11T00:34:43.657004970Z" level=info msg="RemoveContainer for \"704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7\" returns successfully"
Jul 11 00:34:43.657210 kubelet[2072]: I0711 00:34:43.657177 2072 scope.go:117] "RemoveContainer" containerID="a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff"
Jul 11 00:34:43.657451 env[1319]: time="2025-07-11T00:34:43.657384055Z" level=error msg="ContainerStatus for \"a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff\": not found"
Jul 11 00:34:43.657711 kubelet[2072]: E0711 00:34:43.657577 2072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff\": not found" containerID="a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff"
Jul 11 00:34:43.657711 kubelet[2072]: I0711 00:34:43.657607 2072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff"} err="failed to get container status \"a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff\": rpc error: code = NotFound desc = an error occurred when try to find container \"a86c21d8d3e579ca4dec825416a8d2ce82f987b674a6e21e20a1100cd484fdff\": not found"
Jul 11 00:34:43.657711 kubelet[2072]: I0711 00:34:43.657626 2072 scope.go:117] "RemoveContainer" containerID="c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca"
Jul 11 00:34:43.657846 env[1319]: time="2025-07-11T00:34:43.657789660Z" level=error msg="ContainerStatus for \"c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca\": not found"
Jul 11 00:34:43.657990 kubelet[2072]: E0711 00:34:43.657954 2072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca\": not found" containerID="c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca"
Jul 11 00:34:43.658030 kubelet[2072]: I0711 00:34:43.657984 2072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca"} err="failed to get container status \"c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca\": rpc error: code = NotFound desc = an error occurred when try to find container \"c302f1df46e06571fd14265bf5b71cf397e9f47ba0e67e80ee49f97945a6beca\": not found"
Jul 11 00:34:43.658030 kubelet[2072]: I0711 00:34:43.658006 2072 scope.go:117] "RemoveContainer" containerID="855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00"
Jul 11 00:34:43.658185 env[1319]: time="2025-07-11T00:34:43.658147625Z" level=error msg="ContainerStatus for \"855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00\": not found"
Jul 11 00:34:43.658309 kubelet[2072]: E0711 00:34:43.658289 2072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00\": not found" containerID="855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00"
Jul 11 00:34:43.658352 kubelet[2072]: I0711 00:34:43.658313 2072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00"} err="failed to get container status \"855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00\": rpc error: code = NotFound desc = an error occurred when try to find container \"855d490cffb4fdf6a08987a82677da6cb46aa19b9ecb3f5ca7cb91b1a40f9c00\": not found"
Jul 11 00:34:43.658352 kubelet[2072]: I0711 00:34:43.658329 2072 scope.go:117] "RemoveContainer" containerID="c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1"
Jul 11 00:34:43.658642 env[1319]: time="2025-07-11T00:34:43.658526750Z" level=error msg="ContainerStatus for \"c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1\": not found"
Jul 11 00:34:43.658873 kubelet[2072]: E0711 00:34:43.658764 2072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1\": not found" containerID="c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1"
Jul 11 00:34:43.658873 kubelet[2072]: I0711 00:34:43.658790 2072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1"} err="failed to get container status \"c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"c12844d1409b874dea20dacca136fc7d82d37d07d4bf8c7d2f8dccb7d01ed8a1\": not found"
Jul 11 00:34:43.658873 kubelet[2072]: I0711 00:34:43.658804 2072 scope.go:117] "RemoveContainer" containerID="704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7"
Jul 11 00:34:43.658989 env[1319]: time="2025-07-11T00:34:43.658940916Z" level=error msg="ContainerStatus for \"704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7\": not found"
Jul 11 00:34:43.659058 kubelet[2072]: E0711 00:34:43.659038 2072 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7\": not found" containerID="704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7"
Jul 11 00:34:43.659092 kubelet[2072]: I0711 00:34:43.659060 2072 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7"} err="failed to get container status \"704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"704a83063f9601525168e6c03f6bc02cc9a33f6310c781f0d00f7187766d80a7\": not found"
Jul 11 00:34:43.763742 kubelet[2072]: I0711 00:34:43.763541 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-cilium-cgroup\") pod \"2918b70f-208d-44aa-96fa-7bf11f462149\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") "
Jul 11 00:34:43.763742 kubelet[2072]: I0711 00:34:43.763581 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-cni-path\") pod \"2918b70f-208d-44aa-96fa-7bf11f462149\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") "
Jul 11 00:34:43.763742 kubelet[2072]: I0711 00:34:43.763605 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx4z4\" (UniqueName: \"kubernetes.io/projected/7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a-kube-api-access-nx4z4\") pod \"7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a\" (UID: \"7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a\") "
Jul 11 00:34:43.763742 kubelet[2072]: I0711 00:34:43.763638 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2918b70f-208d-44aa-96fa-7bf11f462149-hubble-tls\") pod \"2918b70f-208d-44aa-96fa-7bf11f462149\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") "
Jul 11 00:34:43.763742 kubelet[2072]: I0711 00:34:43.763658 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-hostproc\") pod \"2918b70f-208d-44aa-96fa-7bf11f462149\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") "
Jul 11 00:34:43.763742 kubelet[2072]: I0711 00:34:43.763673 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-lib-modules\") pod \"2918b70f-208d-44aa-96fa-7bf11f462149\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") "
Jul 11 00:34:43.764083 kubelet[2072]: I0711 00:34:43.763693 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2918b70f-208d-44aa-96fa-7bf11f462149-cilium-config-path\") pod \"2918b70f-208d-44aa-96fa-7bf11f462149\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") "
Jul 11 00:34:43.764083 kubelet[2072]: I0711 00:34:43.763707 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-etc-cni-netd\") pod \"2918b70f-208d-44aa-96fa-7bf11f462149\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") "
Jul 11 00:34:43.764083 kubelet[2072]: I0711 00:34:43.763720 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-cilium-run\") pod \"2918b70f-208d-44aa-96fa-7bf11f462149\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") "
Jul 11 00:34:43.764083 kubelet[2072]: I0711 00:34:43.763736 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-host-proc-sys-net\") pod \"2918b70f-208d-44aa-96fa-7bf11f462149\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") "
Jul 11 00:34:43.764083 kubelet[2072]: I0711 00:34:43.763753 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-host-proc-sys-kernel\") pod \"2918b70f-208d-44aa-96fa-7bf11f462149\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") "
Jul 11 00:34:43.764083 kubelet[2072]: I0711 00:34:43.763767 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-xtables-lock\") pod \"2918b70f-208d-44aa-96fa-7bf11f462149\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") "
Jul 11 00:34:43.764257 kubelet[2072]: I0711 00:34:43.763783 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a-cilium-config-path\") pod \"7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a\" (UID: \"7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a\") "
Jul 11 00:34:43.764257 kubelet[2072]: I0711 00:34:43.763801 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2918b70f-208d-44aa-96fa-7bf11f462149-clustermesh-secrets\") pod \"2918b70f-208d-44aa-96fa-7bf11f462149\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") "
Jul 11 00:34:43.764257 kubelet[2072]: I0711 00:34:43.763817 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl2jj\" (UniqueName: \"kubernetes.io/projected/2918b70f-208d-44aa-96fa-7bf11f462149-kube-api-access-bl2jj\") pod \"2918b70f-208d-44aa-96fa-7bf11f462149\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") "
Jul 11 00:34:43.764257 kubelet[2072]: I0711 00:34:43.763833 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-bpf-maps\") pod \"2918b70f-208d-44aa-96fa-7bf11f462149\" (UID: \"2918b70f-208d-44aa-96fa-7bf11f462149\") "
Jul 11 00:34:43.768435 kubelet[2072]: I0711 00:34:43.768115 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2918b70f-208d-44aa-96fa-7bf11f462149" (UID: "2918b70f-208d-44aa-96fa-7bf11f462149"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:34:43.768435 kubelet[2072]: I0711 00:34:43.768116 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2918b70f-208d-44aa-96fa-7bf11f462149" (UID: "2918b70f-208d-44aa-96fa-7bf11f462149"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:34:43.768435 kubelet[2072]: I0711 00:34:43.768428 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2918b70f-208d-44aa-96fa-7bf11f462149" (UID: "2918b70f-208d-44aa-96fa-7bf11f462149"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:34:43.770261 kubelet[2072]: I0711 00:34:43.770224 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2918b70f-208d-44aa-96fa-7bf11f462149-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2918b70f-208d-44aa-96fa-7bf11f462149" (UID: "2918b70f-208d-44aa-96fa-7bf11f462149"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 11 00:34:43.770314 kubelet[2072]: I0711 00:34:43.770278 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2918b70f-208d-44aa-96fa-7bf11f462149" (UID: "2918b70f-208d-44aa-96fa-7bf11f462149"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:34:43.770314 kubelet[2072]: I0711 00:34:43.770295 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2918b70f-208d-44aa-96fa-7bf11f462149" (UID: "2918b70f-208d-44aa-96fa-7bf11f462149"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:34:43.770314 kubelet[2072]: I0711 00:34:43.770310 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2918b70f-208d-44aa-96fa-7bf11f462149" (UID: "2918b70f-208d-44aa-96fa-7bf11f462149"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:34:43.770395 kubelet[2072]: I0711 00:34:43.770327 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-hostproc" (OuterVolumeSpecName: "hostproc") pod "2918b70f-208d-44aa-96fa-7bf11f462149" (UID: "2918b70f-208d-44aa-96fa-7bf11f462149"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:34:43.770578 kubelet[2072]: I0711 00:34:43.770539 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2918b70f-208d-44aa-96fa-7bf11f462149" (UID: "2918b70f-208d-44aa-96fa-7bf11f462149"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:34:43.771259 kubelet[2072]: I0711 00:34:43.770986 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2918b70f-208d-44aa-96fa-7bf11f462149-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2918b70f-208d-44aa-96fa-7bf11f462149" (UID: "2918b70f-208d-44aa-96fa-7bf11f462149"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 11 00:34:43.771259 kubelet[2072]: I0711 00:34:43.771037 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2918b70f-208d-44aa-96fa-7bf11f462149" (UID: "2918b70f-208d-44aa-96fa-7bf11f462149"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:34:43.771259 kubelet[2072]: I0711 00:34:43.771055 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-cni-path" (OuterVolumeSpecName: "cni-path") pod "2918b70f-208d-44aa-96fa-7bf11f462149" (UID: "2918b70f-208d-44aa-96fa-7bf11f462149"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 11 00:34:43.771708 kubelet[2072]: I0711 00:34:43.771686 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a-kube-api-access-nx4z4" (OuterVolumeSpecName: "kube-api-access-nx4z4") pod "7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a" (UID: "7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a"). InnerVolumeSpecName "kube-api-access-nx4z4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 11 00:34:43.772288 kubelet[2072]: I0711 00:34:43.772252 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a" (UID: "7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 11 00:34:43.773275 kubelet[2072]: I0711 00:34:43.773243 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2918b70f-208d-44aa-96fa-7bf11f462149-kube-api-access-bl2jj" (OuterVolumeSpecName: "kube-api-access-bl2jj") pod "2918b70f-208d-44aa-96fa-7bf11f462149" (UID: "2918b70f-208d-44aa-96fa-7bf11f462149"). InnerVolumeSpecName "kube-api-access-bl2jj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 11 00:34:43.774840 kubelet[2072]: I0711 00:34:43.774811 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2918b70f-208d-44aa-96fa-7bf11f462149-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2918b70f-208d-44aa-96fa-7bf11f462149" (UID: "2918b70f-208d-44aa-96fa-7bf11f462149"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 11 00:34:43.864275 kubelet[2072]: I0711 00:34:43.864238 2072 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2918b70f-208d-44aa-96fa-7bf11f462149-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:43.864275 kubelet[2072]: I0711 00:34:43.864270 2072 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:43.864275 kubelet[2072]: I0711 00:34:43.864279 2072 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:43.864275 kubelet[2072]: I0711 00:34:43.864287 2072 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:43.864473 kubelet[2072]: I0711 00:34:43.864296 2072 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:43.864473 kubelet[2072]: I0711 00:34:43.864305 2072 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:43.864473 kubelet[2072]: I0711 00:34:43.864313 2072 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:43.864473 kubelet[2072]: I0711 00:34:43.864320 2072 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:43.864473 kubelet[2072]: I0711 00:34:43.864328 2072 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:43.864473 kubelet[2072]: I0711 00:34:43.864337 2072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl2jj\" (UniqueName: \"kubernetes.io/projected/2918b70f-208d-44aa-96fa-7bf11f462149-kube-api-access-bl2jj\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:43.864473 kubelet[2072]: I0711 00:34:43.864346 2072 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2918b70f-208d-44aa-96fa-7bf11f462149-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:43.864473 kubelet[2072]: I0711 00:34:43.864355 2072 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:43.864692 kubelet[2072]: I0711 00:34:43.864362 2072 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:43.864692 kubelet[2072]: I0711 00:34:43.864369 2072 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2918b70f-208d-44aa-96fa-7bf11f462149-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:43.864692 kubelet[2072]: I0711 00:34:43.864379 2072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx4z4\" (UniqueName: \"kubernetes.io/projected/7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a-kube-api-access-nx4z4\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:43.864692 kubelet[2072]: I0711 00:34:43.864386 2072 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2918b70f-208d-44aa-96fa-7bf11f462149-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 11 00:34:44.463064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab-rootfs.mount: Deactivated successfully.
Jul 11 00:34:44.463218 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10f4aafa68a711c4fb1c38313758864d1cbe49ce4f1471574d145e432666acab-shm.mount: Deactivated successfully.
Jul 11 00:34:44.463308 systemd[1]: var-lib-kubelet-pods-2918b70f\x2d208d\x2d44aa\x2d96fa\x2d7bf11f462149-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbl2jj.mount: Deactivated successfully.
Jul 11 00:34:44.463387 systemd[1]: var-lib-kubelet-pods-7c6f7ff5\x2dc18c\x2d4e5a\x2dbeda\x2d3fb29c8fd00a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnx4z4.mount: Deactivated successfully.
Jul 11 00:34:44.463466 systemd[1]: var-lib-kubelet-pods-2918b70f\x2d208d\x2d44aa\x2d96fa\x2d7bf11f462149-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 11 00:34:44.463558 systemd[1]: var-lib-kubelet-pods-2918b70f\x2d208d\x2d44aa\x2d96fa\x2d7bf11f462149-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 11 00:34:45.412495 sshd[3694]: pam_unix(sshd:session): session closed for user core
Jul 11 00:34:45.414908 systemd[1]: Started sshd@22-10.0.0.84:22-10.0.0.1:33766.service.
Jul 11 00:34:45.415926 systemd[1]: sshd@21-10.0.0.84:22-10.0.0.1:60590.service: Deactivated successfully.
Jul 11 00:34:45.416839 systemd[1]: session-22.scope: Deactivated successfully.
Jul 11 00:34:45.417712 systemd-logind[1301]: Session 22 logged out. Waiting for processes to exit. Jul 11 00:34:45.420786 systemd-logind[1301]: Removed session 22. Jul 11 00:34:45.449231 sshd[3861]: Accepted publickey for core from 10.0.0.1 port 33766 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:45.450582 sshd[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:45.453817 kubelet[2072]: I0711 00:34:45.453775 2072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2918b70f-208d-44aa-96fa-7bf11f462149" path="/var/lib/kubelet/pods/2918b70f-208d-44aa-96fa-7bf11f462149/volumes" Jul 11 00:34:45.454341 kubelet[2072]: I0711 00:34:45.454319 2072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a" path="/var/lib/kubelet/pods/7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a/volumes" Jul 11 00:34:45.457000 systemd-logind[1301]: New session 23 of user core. Jul 11 00:34:45.457899 systemd[1]: Started session-23.scope. Jul 11 00:34:45.508192 kubelet[2072]: E0711 00:34:45.508149 2072 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 11 00:34:46.706824 systemd[1]: Started sshd@23-10.0.0.84:22-10.0.0.1:33780.service. Jul 11 00:34:46.707402 sshd[3861]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:46.710615 systemd[1]: sshd@22-10.0.0.84:22-10.0.0.1:33766.service: Deactivated successfully. Jul 11 00:34:46.711822 systemd[1]: session-23.scope: Deactivated successfully. Jul 11 00:34:46.711865 systemd-logind[1301]: Session 23 logged out. Waiting for processes to exit. Jul 11 00:34:46.714073 systemd-logind[1301]: Removed session 23. 
Jul 11 00:34:46.729658 kubelet[2072]: E0711 00:34:46.721072 2072 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2918b70f-208d-44aa-96fa-7bf11f462149" containerName="mount-bpf-fs" Jul 11 00:34:46.729658 kubelet[2072]: E0711 00:34:46.721102 2072 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a" containerName="cilium-operator" Jul 11 00:34:46.729658 kubelet[2072]: E0711 00:34:46.721109 2072 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2918b70f-208d-44aa-96fa-7bf11f462149" containerName="clean-cilium-state" Jul 11 00:34:46.729658 kubelet[2072]: E0711 00:34:46.721115 2072 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2918b70f-208d-44aa-96fa-7bf11f462149" containerName="cilium-agent" Jul 11 00:34:46.729658 kubelet[2072]: E0711 00:34:46.721121 2072 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2918b70f-208d-44aa-96fa-7bf11f462149" containerName="mount-cgroup" Jul 11 00:34:46.729658 kubelet[2072]: E0711 00:34:46.721127 2072 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2918b70f-208d-44aa-96fa-7bf11f462149" containerName="apply-sysctl-overwrites" Jul 11 00:34:46.729658 kubelet[2072]: I0711 00:34:46.721152 2072 memory_manager.go:354] "RemoveStaleState removing state" podUID="2918b70f-208d-44aa-96fa-7bf11f462149" containerName="cilium-agent" Jul 11 00:34:46.729658 kubelet[2072]: I0711 00:34:46.721158 2072 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6f7ff5-c18c-4e5a-beda-3fb29c8fd00a" containerName="cilium-operator" Jul 11 00:34:46.764242 sshd[3874]: Accepted publickey for core from 10.0.0.1 port 33780 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:46.765607 sshd[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:46.769392 systemd-logind[1301]: New session 24 of user core. 
Jul 11 00:34:46.770039 systemd[1]: Started session-24.scope. Jul 11 00:34:46.877273 kubelet[2072]: I0711 00:34:46.877209 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-bpf-maps\") pod \"cilium-s84gd\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " pod="kube-system/cilium-s84gd" Jul 11 00:34:46.877441 kubelet[2072]: I0711 00:34:46.877423 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-etc-cni-netd\") pod \"cilium-s84gd\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " pod="kube-system/cilium-s84gd" Jul 11 00:34:46.877508 kubelet[2072]: I0711 00:34:46.877496 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-lib-modules\") pod \"cilium-s84gd\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " pod="kube-system/cilium-s84gd" Jul 11 00:34:46.877668 kubelet[2072]: I0711 00:34:46.877625 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-hubble-tls\") pod \"cilium-s84gd\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " pod="kube-system/cilium-s84gd" Jul 11 00:34:46.877716 kubelet[2072]: I0711 00:34:46.877696 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-host-proc-sys-kernel\") pod \"cilium-s84gd\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " pod="kube-system/cilium-s84gd" Jul 11 00:34:46.877742 kubelet[2072]: I0711 00:34:46.877719 2072 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-hostproc\") pod \"cilium-s84gd\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " pod="kube-system/cilium-s84gd" Jul 11 00:34:46.877786 kubelet[2072]: I0711 00:34:46.877737 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-config-path\") pod \"cilium-s84gd\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " pod="kube-system/cilium-s84gd" Jul 11 00:34:46.877823 kubelet[2072]: I0711 00:34:46.877792 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-host-proc-sys-net\") pod \"cilium-s84gd\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " pod="kube-system/cilium-s84gd" Jul 11 00:34:46.877823 kubelet[2072]: I0711 00:34:46.877810 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-cgroup\") pod \"cilium-s84gd\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " pod="kube-system/cilium-s84gd" Jul 11 00:34:46.877873 kubelet[2072]: I0711 00:34:46.877850 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-clustermesh-secrets\") pod \"cilium-s84gd\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " pod="kube-system/cilium-s84gd" Jul 11 00:34:46.877873 kubelet[2072]: I0711 00:34:46.877870 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cni-path\") pod \"cilium-s84gd\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " pod="kube-system/cilium-s84gd" Jul 11 00:34:46.877916 kubelet[2072]: I0711 00:34:46.877885 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-xtables-lock\") pod \"cilium-s84gd\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " pod="kube-system/cilium-s84gd" Jul 11 00:34:46.877939 kubelet[2072]: I0711 00:34:46.877914 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9t5z\" (UniqueName: \"kubernetes.io/projected/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-kube-api-access-x9t5z\") pod \"cilium-s84gd\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " pod="kube-system/cilium-s84gd" Jul 11 00:34:46.877986 kubelet[2072]: I0711 00:34:46.877960 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-run\") pod \"cilium-s84gd\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " pod="kube-system/cilium-s84gd" Jul 11 00:34:46.878019 kubelet[2072]: I0711 00:34:46.878007 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-ipsec-secrets\") pod \"cilium-s84gd\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " pod="kube-system/cilium-s84gd" Jul 11 00:34:46.893126 sshd[3874]: pam_unix(sshd:session): session closed for user core Jul 11 00:34:46.895935 systemd[1]: Started sshd@24-10.0.0.84:22-10.0.0.1:33782.service. 
Jul 11 00:34:46.904677 kubelet[2072]: E0711 00:34:46.902241 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-x9t5z lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-s84gd" podUID="01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" Jul 11 00:34:46.904364 systemd[1]: sshd@23-10.0.0.84:22-10.0.0.1:33780.service: Deactivated successfully. Jul 11 00:34:46.905194 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 00:34:46.910748 systemd-logind[1301]: Session 24 logged out. Waiting for processes to exit. Jul 11 00:34:46.912091 systemd-logind[1301]: Removed session 24. Jul 11 00:34:46.934827 sshd[3889]: Accepted publickey for core from 10.0.0.1 port 33782 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:34:46.936443 sshd[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:34:46.940538 systemd-logind[1301]: New session 25 of user core. Jul 11 00:34:46.941037 systemd[1]: Started session-25.scope. 
Jul 11 00:34:47.549379 kubelet[2072]: I0711 00:34:47.549331 2072 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-11T00:34:47Z","lastTransitionTime":"2025-07-11T00:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 11 00:34:47.782178 kubelet[2072]: I0711 00:34:47.782140 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" (UID: "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:34:47.782559 kubelet[2072]: I0711 00:34:47.782542 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-host-proc-sys-net\") pod \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " Jul 11 00:34:47.782677 kubelet[2072]: I0711 00:34:47.782665 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-etc-cni-netd\") pod \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " Jul 11 00:34:47.783084 kubelet[2072]: I0711 00:34:47.782794 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-clustermesh-secrets\") pod \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " Jul 11 00:34:47.783229 kubelet[2072]: I0711 
00:34:47.783215 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-host-proc-sys-kernel\") pod \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " Jul 11 00:34:47.783342 kubelet[2072]: I0711 00:34:47.783332 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-hostproc\") pod \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " Jul 11 00:34:47.783430 kubelet[2072]: I0711 00:34:47.783418 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-xtables-lock\") pod \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " Jul 11 00:34:47.783514 kubelet[2072]: I0711 00:34:47.783502 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-cgroup\") pod \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " Jul 11 00:34:47.783616 kubelet[2072]: I0711 00:34:47.783603 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9t5z\" (UniqueName: \"kubernetes.io/projected/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-kube-api-access-x9t5z\") pod \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " Jul 11 00:34:47.783720 kubelet[2072]: I0711 00:34:47.783708 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-lib-modules\") pod 
\"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " Jul 11 00:34:47.783801 kubelet[2072]: I0711 00:34:47.783790 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-run\") pod \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " Jul 11 00:34:47.783882 kubelet[2072]: I0711 00:34:47.783870 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-bpf-maps\") pod \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " Jul 11 00:34:47.783948 kubelet[2072]: I0711 00:34:47.782758 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" (UID: "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:34:47.783987 kubelet[2072]: I0711 00:34:47.783277 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" (UID: "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:34:47.783987 kubelet[2072]: I0711 00:34:47.783374 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-hostproc" (OuterVolumeSpecName: "hostproc") pod "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" (UID: "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c"). 
InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:34:47.783987 kubelet[2072]: I0711 00:34:47.783467 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" (UID: "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:34:47.783987 kubelet[2072]: I0711 00:34:47.783552 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" (UID: "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:34:47.783987 kubelet[2072]: I0711 00:34:47.783797 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" (UID: "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:34:47.784111 kubelet[2072]: I0711 00:34:47.783820 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" (UID: "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:34:47.784111 kubelet[2072]: I0711 00:34:47.784037 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" (UID: "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:34:47.784198 kubelet[2072]: I0711 00:34:47.784185 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-config-path\") pod \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " Jul 11 00:34:47.784290 kubelet[2072]: I0711 00:34:47.784279 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cni-path\") pod \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " Jul 11 00:34:47.784373 kubelet[2072]: I0711 00:34:47.784362 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-ipsec-secrets\") pod \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " Jul 11 00:34:47.784457 kubelet[2072]: I0711 00:34:47.784445 2072 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-hubble-tls\") pod \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\" (UID: \"01f4cdbb-8898-48c0-b7c8-021a22fe2a7c\") " Jul 11 00:34:47.784578 kubelet[2072]: I0711 00:34:47.784565 2072 reconciler_common.go:293] "Volume detached for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 11 00:34:47.784670 kubelet[2072]: I0711 00:34:47.784660 2072 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 11 00:34:47.784856 kubelet[2072]: I0711 00:34:47.784838 2072 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 11 00:34:47.784916 kubelet[2072]: I0711 00:34:47.784858 2072 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 11 00:34:47.784916 kubelet[2072]: I0711 00:34:47.784869 2072 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 11 00:34:47.784916 kubelet[2072]: I0711 00:34:47.784878 2072 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 11 00:34:47.784916 kubelet[2072]: I0711 00:34:47.784895 2072 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 11 00:34:47.784916 kubelet[2072]: I0711 00:34:47.784903 2072 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-run\") on node 
\"localhost\" DevicePath \"\"" Jul 11 00:34:47.784916 kubelet[2072]: I0711 00:34:47.784911 2072 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 11 00:34:47.784916 kubelet[2072]: I0711 00:34:47.784355 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cni-path" (OuterVolumeSpecName: "cni-path") pod "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" (UID: "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 00:34:47.786898 systemd[1]: var-lib-kubelet-pods-01f4cdbb\x2d8898\x2d48c0\x2db7c8\x2d021a22fe2a7c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 11 00:34:47.789008 systemd[1]: var-lib-kubelet-pods-01f4cdbb\x2d8898\x2d48c0\x2db7c8\x2d021a22fe2a7c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx9t5z.mount: Deactivated successfully. Jul 11 00:34:47.789134 systemd[1]: var-lib-kubelet-pods-01f4cdbb\x2d8898\x2d48c0\x2db7c8\x2d021a22fe2a7c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 11 00:34:47.789218 systemd[1]: var-lib-kubelet-pods-01f4cdbb\x2d8898\x2d48c0\x2db7c8\x2d021a22fe2a7c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 11 00:34:47.790612 kubelet[2072]: I0711 00:34:47.790589 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" (UID: "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 11 00:34:47.790729 kubelet[2072]: I0711 00:34:47.790683 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" (UID: "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 00:34:47.790789 kubelet[2072]: I0711 00:34:47.790744 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" (UID: "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 11 00:34:47.790986 kubelet[2072]: I0711 00:34:47.790968 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" (UID: "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 11 00:34:47.791065 kubelet[2072]: I0711 00:34:47.790992 2072 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-kube-api-access-x9t5z" (OuterVolumeSpecName: "kube-api-access-x9t5z") pod "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" (UID: "01f4cdbb-8898-48c0-b7c8-021a22fe2a7c"). InnerVolumeSpecName "kube-api-access-x9t5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 00:34:47.885391 kubelet[2072]: I0711 00:34:47.885366 2072 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:34:47.885532 kubelet[2072]: I0711 00:34:47.885512 2072 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 11 00:34:47.885598 kubelet[2072]: I0711 00:34:47.885589 2072 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 11 00:34:47.885704 kubelet[2072]: I0711 00:34:47.885693 2072 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 11 00:34:47.885770 kubelet[2072]: I0711 00:34:47.885760 2072 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 11 00:34:47.885835 kubelet[2072]: I0711 00:34:47.885826 2072 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9t5z\" (UniqueName: \"kubernetes.io/projected/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c-kube-api-access-x9t5z\") on node \"localhost\" DevicePath \"\"" Jul 11 00:34:48.676740 kubelet[2072]: W0711 00:34:48.676707 2072 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace 
"kube-system": no relationship found between node 'localhost' and this object Jul 11 00:34:48.676932 kubelet[2072]: E0711 00:34:48.676913 2072 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 11 00:34:48.677042 kubelet[2072]: W0711 00:34:48.677029 2072 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 11 00:34:48.677111 kubelet[2072]: E0711 00:34:48.677098 2072 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 11 00:34:48.677230 kubelet[2072]: W0711 00:34:48.677218 2072 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 11 00:34:48.677301 kubelet[2072]: E0711 00:34:48.677287 2072 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User 
\"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 11 00:34:48.677727 kubelet[2072]: W0711 00:34:48.677335 2072 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 11 00:34:48.677727 kubelet[2072]: E0711 00:34:48.677380 2072 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 11 00:34:48.692118 kubelet[2072]: I0711 00:34:48.692081 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b165f9c7-70c8-475a-9c0b-306115a4d788-xtables-lock\") pod \"cilium-grnwr\" (UID: \"b165f9c7-70c8-475a-9c0b-306115a4d788\") " pod="kube-system/cilium-grnwr" Jul 11 00:34:48.692118 kubelet[2072]: I0711 00:34:48.692121 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b165f9c7-70c8-475a-9c0b-306115a4d788-clustermesh-secrets\") pod \"cilium-grnwr\" (UID: \"b165f9c7-70c8-475a-9c0b-306115a4d788\") " pod="kube-system/cilium-grnwr" Jul 11 00:34:48.692245 kubelet[2072]: I0711 00:34:48.692138 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/b165f9c7-70c8-475a-9c0b-306115a4d788-cilium-run\") pod \"cilium-grnwr\" (UID: \"b165f9c7-70c8-475a-9c0b-306115a4d788\") " pod="kube-system/cilium-grnwr" Jul 11 00:34:48.692245 kubelet[2072]: I0711 00:34:48.692155 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b165f9c7-70c8-475a-9c0b-306115a4d788-host-proc-sys-net\") pod \"cilium-grnwr\" (UID: \"b165f9c7-70c8-475a-9c0b-306115a4d788\") " pod="kube-system/cilium-grnwr" Jul 11 00:34:48.692245 kubelet[2072]: I0711 00:34:48.692170 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b165f9c7-70c8-475a-9c0b-306115a4d788-hubble-tls\") pod \"cilium-grnwr\" (UID: \"b165f9c7-70c8-475a-9c0b-306115a4d788\") " pod="kube-system/cilium-grnwr" Jul 11 00:34:48.692245 kubelet[2072]: I0711 00:34:48.692185 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b165f9c7-70c8-475a-9c0b-306115a4d788-bpf-maps\") pod \"cilium-grnwr\" (UID: \"b165f9c7-70c8-475a-9c0b-306115a4d788\") " pod="kube-system/cilium-grnwr" Jul 11 00:34:48.692245 kubelet[2072]: I0711 00:34:48.692199 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b165f9c7-70c8-475a-9c0b-306115a4d788-etc-cni-netd\") pod \"cilium-grnwr\" (UID: \"b165f9c7-70c8-475a-9c0b-306115a4d788\") " pod="kube-system/cilium-grnwr" Jul 11 00:34:48.693732 kubelet[2072]: I0711 00:34:48.692213 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b165f9c7-70c8-475a-9c0b-306115a4d788-lib-modules\") pod \"cilium-grnwr\" (UID: \"b165f9c7-70c8-475a-9c0b-306115a4d788\") " 
pod="kube-system/cilium-grnwr" Jul 11 00:34:48.693784 kubelet[2072]: I0711 00:34:48.693754 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dknrd\" (UniqueName: \"kubernetes.io/projected/b165f9c7-70c8-475a-9c0b-306115a4d788-kube-api-access-dknrd\") pod \"cilium-grnwr\" (UID: \"b165f9c7-70c8-475a-9c0b-306115a4d788\") " pod="kube-system/cilium-grnwr" Jul 11 00:34:48.693820 kubelet[2072]: I0711 00:34:48.693781 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b165f9c7-70c8-475a-9c0b-306115a4d788-cilium-ipsec-secrets\") pod \"cilium-grnwr\" (UID: \"b165f9c7-70c8-475a-9c0b-306115a4d788\") " pod="kube-system/cilium-grnwr" Jul 11 00:34:48.693820 kubelet[2072]: I0711 00:34:48.693803 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b165f9c7-70c8-475a-9c0b-306115a4d788-host-proc-sys-kernel\") pod \"cilium-grnwr\" (UID: \"b165f9c7-70c8-475a-9c0b-306115a4d788\") " pod="kube-system/cilium-grnwr" Jul 11 00:34:48.693871 kubelet[2072]: I0711 00:34:48.693821 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b165f9c7-70c8-475a-9c0b-306115a4d788-hostproc\") pod \"cilium-grnwr\" (UID: \"b165f9c7-70c8-475a-9c0b-306115a4d788\") " pod="kube-system/cilium-grnwr" Jul 11 00:34:48.693871 kubelet[2072]: I0711 00:34:48.693836 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b165f9c7-70c8-475a-9c0b-306115a4d788-cilium-config-path\") pod \"cilium-grnwr\" (UID: \"b165f9c7-70c8-475a-9c0b-306115a4d788\") " pod="kube-system/cilium-grnwr" Jul 11 00:34:48.693871 kubelet[2072]: I0711 00:34:48.693852 
2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b165f9c7-70c8-475a-9c0b-306115a4d788-cni-path\") pod \"cilium-grnwr\" (UID: \"b165f9c7-70c8-475a-9c0b-306115a4d788\") " pod="kube-system/cilium-grnwr" Jul 11 00:34:48.693932 kubelet[2072]: I0711 00:34:48.693874 2072 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b165f9c7-70c8-475a-9c0b-306115a4d788-cilium-cgroup\") pod \"cilium-grnwr\" (UID: \"b165f9c7-70c8-475a-9c0b-306115a4d788\") " pod="kube-system/cilium-grnwr" Jul 11 00:34:49.453444 kubelet[2072]: I0711 00:34:49.453401 2072 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01f4cdbb-8898-48c0-b7c8-021a22fe2a7c" path="/var/lib/kubelet/pods/01f4cdbb-8898-48c0-b7c8-021a22fe2a7c/volumes" Jul 11 00:34:49.795584 kubelet[2072]: E0711 00:34:49.795460 2072 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jul 11 00:34:49.795584 kubelet[2072]: E0711 00:34:49.795557 2072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b165f9c7-70c8-475a-9c0b-306115a4d788-clustermesh-secrets podName:b165f9c7-70c8-475a-9c0b-306115a4d788 nodeName:}" failed. No retries permitted until 2025-07-11 00:34:50.295528805 +0000 UTC m=+84.950845641 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/b165f9c7-70c8-475a-9c0b-306115a4d788-clustermesh-secrets") pod "cilium-grnwr" (UID: "b165f9c7-70c8-475a-9c0b-306115a4d788") : failed to sync secret cache: timed out waiting for the condition Jul 11 00:34:50.478571 kubelet[2072]: E0711 00:34:50.478518 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:50.479145 env[1319]: time="2025-07-11T00:34:50.479080973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-grnwr,Uid:b165f9c7-70c8-475a-9c0b-306115a4d788,Namespace:kube-system,Attempt:0,}" Jul 11 00:34:50.491454 env[1319]: time="2025-07-11T00:34:50.491392375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:34:50.491587 env[1319]: time="2025-07-11T00:34:50.491431535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:34:50.491587 env[1319]: time="2025-07-11T00:34:50.491442456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:34:50.491684 env[1319]: time="2025-07-11T00:34:50.491601858Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd7f321b29e590ff726b83ed7c41c9cab78d2a48f87f20dbe0cb2c1e6faa8eb6 pid=3922 runtime=io.containerd.runc.v2 Jul 11 00:34:50.510114 kubelet[2072]: E0711 00:34:50.510066 2072 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 11 00:34:50.529413 env[1319]: time="2025-07-11T00:34:50.529376995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-grnwr,Uid:b165f9c7-70c8-475a-9c0b-306115a4d788,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd7f321b29e590ff726b83ed7c41c9cab78d2a48f87f20dbe0cb2c1e6faa8eb6\"" Jul 11 00:34:50.530214 kubelet[2072]: E0711 00:34:50.530188 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:50.533145 env[1319]: time="2025-07-11T00:34:50.533101004Z" level=info msg="CreateContainer within sandbox \"fd7f321b29e590ff726b83ed7c41c9cab78d2a48f87f20dbe0cb2c1e6faa8eb6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 00:34:50.542924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1723494303.mount: Deactivated successfully. 
Jul 11 00:34:50.545613 env[1319]: time="2025-07-11T00:34:50.545571608Z" level=info msg="CreateContainer within sandbox \"fd7f321b29e590ff726b83ed7c41c9cab78d2a48f87f20dbe0cb2c1e6faa8eb6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3c8ae7e7f7075509ea1e1ebfbde23e207a1fe6e0b8903367edcb1034bda2de50\"" Jul 11 00:34:50.546825 env[1319]: time="2025-07-11T00:34:50.546796664Z" level=info msg="StartContainer for \"3c8ae7e7f7075509ea1e1ebfbde23e207a1fe6e0b8903367edcb1034bda2de50\"" Jul 11 00:34:50.637503 env[1319]: time="2025-07-11T00:34:50.637452857Z" level=info msg="StartContainer for \"3c8ae7e7f7075509ea1e1ebfbde23e207a1fe6e0b8903367edcb1034bda2de50\" returns successfully" Jul 11 00:34:50.650709 kubelet[2072]: E0711 00:34:50.649912 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:50.667880 env[1319]: time="2025-07-11T00:34:50.667831297Z" level=info msg="shim disconnected" id=3c8ae7e7f7075509ea1e1ebfbde23e207a1fe6e0b8903367edcb1034bda2de50 Jul 11 00:34:50.667880 env[1319]: time="2025-07-11T00:34:50.667878777Z" level=warning msg="cleaning up after shim disconnected" id=3c8ae7e7f7075509ea1e1ebfbde23e207a1fe6e0b8903367edcb1034bda2de50 namespace=k8s.io Jul 11 00:34:50.667880 env[1319]: time="2025-07-11T00:34:50.667889537Z" level=info msg="cleaning up dead shim" Jul 11 00:34:50.674797 env[1319]: time="2025-07-11T00:34:50.674758108Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:34:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4006 runtime=io.containerd.runc.v2\n" Jul 11 00:34:51.529067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c8ae7e7f7075509ea1e1ebfbde23e207a1fe6e0b8903367edcb1034bda2de50-rootfs.mount: Deactivated successfully. 
Jul 11 00:34:51.653003 kubelet[2072]: E0711 00:34:51.652971 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:51.655256 env[1319]: time="2025-07-11T00:34:51.655218011Z" level=info msg="CreateContainer within sandbox \"fd7f321b29e590ff726b83ed7c41c9cab78d2a48f87f20dbe0cb2c1e6faa8eb6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 00:34:51.665054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3096719797.mount: Deactivated successfully. Jul 11 00:34:51.667758 env[1319]: time="2025-07-11T00:34:51.667722335Z" level=info msg="CreateContainer within sandbox \"fd7f321b29e590ff726b83ed7c41c9cab78d2a48f87f20dbe0cb2c1e6faa8eb6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9894fe4c4253a8168fa9c0b0a5c7eaa8ed77b5023361c8d159c507b7d2bf2ddd\"" Jul 11 00:34:51.668829 env[1319]: time="2025-07-11T00:34:51.668797349Z" level=info msg="StartContainer for \"9894fe4c4253a8168fa9c0b0a5c7eaa8ed77b5023361c8d159c507b7d2bf2ddd\"" Jul 11 00:34:51.747441 env[1319]: time="2025-07-11T00:34:51.747391139Z" level=info msg="StartContainer for \"9894fe4c4253a8168fa9c0b0a5c7eaa8ed77b5023361c8d159c507b7d2bf2ddd\" returns successfully" Jul 11 00:34:51.801354 env[1319]: time="2025-07-11T00:34:51.801232804Z" level=info msg="shim disconnected" id=9894fe4c4253a8168fa9c0b0a5c7eaa8ed77b5023361c8d159c507b7d2bf2ddd Jul 11 00:34:51.801354 env[1319]: time="2025-07-11T00:34:51.801284165Z" level=warning msg="cleaning up after shim disconnected" id=9894fe4c4253a8168fa9c0b0a5c7eaa8ed77b5023361c8d159c507b7d2bf2ddd namespace=k8s.io Jul 11 00:34:51.801354 env[1319]: time="2025-07-11T00:34:51.801294685Z" level=info msg="cleaning up dead shim" Jul 11 00:34:51.808615 env[1319]: time="2025-07-11T00:34:51.808563460Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:34:51Z\" level=info 
msg=\"starting signal loop\" namespace=k8s.io pid=4068 runtime=io.containerd.runc.v2\n" Jul 11 00:34:52.529161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9894fe4c4253a8168fa9c0b0a5c7eaa8ed77b5023361c8d159c507b7d2bf2ddd-rootfs.mount: Deactivated successfully. Jul 11 00:34:52.657487 kubelet[2072]: E0711 00:34:52.656320 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:52.658345 env[1319]: time="2025-07-11T00:34:52.658290395Z" level=info msg="CreateContainer within sandbox \"fd7f321b29e590ff726b83ed7c41c9cab78d2a48f87f20dbe0cb2c1e6faa8eb6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 00:34:52.671301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3101226248.mount: Deactivated successfully. Jul 11 00:34:52.675197 env[1319]: time="2025-07-11T00:34:52.675145134Z" level=info msg="CreateContainer within sandbox \"fd7f321b29e590ff726b83ed7c41c9cab78d2a48f87f20dbe0cb2c1e6faa8eb6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c725af8038656d32c9847ea31b2676d9413771cbf0e1b261f54981b620f8ae79\"" Jul 11 00:34:52.680324 env[1319]: time="2025-07-11T00:34:52.675987385Z" level=info msg="StartContainer for \"c725af8038656d32c9847ea31b2676d9413771cbf0e1b261f54981b620f8ae79\"" Jul 11 00:34:52.727719 env[1319]: time="2025-07-11T00:34:52.727673860Z" level=info msg="StartContainer for \"c725af8038656d32c9847ea31b2676d9413771cbf0e1b261f54981b620f8ae79\" returns successfully" Jul 11 00:34:52.747103 env[1319]: time="2025-07-11T00:34:52.747043072Z" level=info msg="shim disconnected" id=c725af8038656d32c9847ea31b2676d9413771cbf0e1b261f54981b620f8ae79 Jul 11 00:34:52.747103 env[1319]: time="2025-07-11T00:34:52.747092433Z" level=warning msg="cleaning up after shim disconnected" id=c725af8038656d32c9847ea31b2676d9413771cbf0e1b261f54981b620f8ae79 namespace=k8s.io Jul 11 
00:34:52.747103 env[1319]: time="2025-07-11T00:34:52.747101233Z" level=info msg="cleaning up dead shim" Jul 11 00:34:52.754185 env[1319]: time="2025-07-11T00:34:52.754152605Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:34:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4125 runtime=io.containerd.runc.v2\n" Jul 11 00:34:53.451824 kubelet[2072]: E0711 00:34:53.451778 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-qczl9" podUID="e8ee0a98-8921-45c7-83f2-d805222561c3" Jul 11 00:34:53.529143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c725af8038656d32c9847ea31b2676d9413771cbf0e1b261f54981b620f8ae79-rootfs.mount: Deactivated successfully. Jul 11 00:34:53.659772 kubelet[2072]: E0711 00:34:53.659725 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:53.662687 env[1319]: time="2025-07-11T00:34:53.662647898Z" level=info msg="CreateContainer within sandbox \"fd7f321b29e590ff726b83ed7c41c9cab78d2a48f87f20dbe0cb2c1e6faa8eb6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 00:34:53.680034 env[1319]: time="2025-07-11T00:34:53.679965683Z" level=info msg="CreateContainer within sandbox \"fd7f321b29e590ff726b83ed7c41c9cab78d2a48f87f20dbe0cb2c1e6faa8eb6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b9e1c296ce63ff58f4a2f520c567bd1972558a1946a136c85b6b21519025738e\"" Jul 11 00:34:53.680609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3069506862.mount: Deactivated successfully. 
Jul 11 00:34:53.681708 env[1319]: time="2025-07-11T00:34:53.681271660Z" level=info msg="StartContainer for \"b9e1c296ce63ff58f4a2f520c567bd1972558a1946a136c85b6b21519025738e\"" Jul 11 00:34:53.727178 env[1319]: time="2025-07-11T00:34:53.727075855Z" level=info msg="StartContainer for \"b9e1c296ce63ff58f4a2f520c567bd1972558a1946a136c85b6b21519025738e\" returns successfully" Jul 11 00:34:53.745471 env[1319]: time="2025-07-11T00:34:53.745426533Z" level=info msg="shim disconnected" id=b9e1c296ce63ff58f4a2f520c567bd1972558a1946a136c85b6b21519025738e Jul 11 00:34:53.745724 env[1319]: time="2025-07-11T00:34:53.745705177Z" level=warning msg="cleaning up after shim disconnected" id=b9e1c296ce63ff58f4a2f520c567bd1972558a1946a136c85b6b21519025738e namespace=k8s.io Jul 11 00:34:53.745800 env[1319]: time="2025-07-11T00:34:53.745787618Z" level=info msg="cleaning up dead shim" Jul 11 00:34:53.751988 env[1319]: time="2025-07-11T00:34:53.751952778Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:34:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4180 runtime=io.containerd.runc.v2\n" Jul 11 00:34:54.452436 kubelet[2072]: E0711 00:34:54.452120 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:54.529280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9e1c296ce63ff58f4a2f520c567bd1972558a1946a136c85b6b21519025738e-rootfs.mount: Deactivated successfully. 
Jul 11 00:34:54.663890 kubelet[2072]: E0711 00:34:54.663856 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:54.666018 env[1319]: time="2025-07-11T00:34:54.665979694Z" level=info msg="CreateContainer within sandbox \"fd7f321b29e590ff726b83ed7c41c9cab78d2a48f87f20dbe0cb2c1e6faa8eb6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 11 00:34:54.677806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4044921538.mount: Deactivated successfully. Jul 11 00:34:54.678576 env[1319]: time="2025-07-11T00:34:54.678532256Z" level=info msg="CreateContainer within sandbox \"fd7f321b29e590ff726b83ed7c41c9cab78d2a48f87f20dbe0cb2c1e6faa8eb6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f55b5e195d233f767990206d5f20c5c4fdb5e861f435f39d20bfbe0f18042c6f\"" Jul 11 00:34:54.680744 env[1319]: time="2025-07-11T00:34:54.679151784Z" level=info msg="StartContainer for \"f55b5e195d233f767990206d5f20c5c4fdb5e861f435f39d20bfbe0f18042c6f\"" Jul 11 00:34:54.728815 env[1319]: time="2025-07-11T00:34:54.728700065Z" level=info msg="StartContainer for \"f55b5e195d233f767990206d5f20c5c4fdb5e861f435f39d20bfbe0f18042c6f\" returns successfully" Jul 11 00:34:54.953652 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 11 00:34:55.451320 kubelet[2072]: E0711 00:34:55.451224 2072 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-qczl9" podUID="e8ee0a98-8921-45c7-83f2-d805222561c3" Jul 11 00:34:55.668769 kubelet[2072]: E0711 00:34:55.668305 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:55.683479 kubelet[2072]: I0711 00:34:55.683431 2072 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-grnwr" podStartSLOduration=7.683411818 podStartE2EDuration="7.683411818s" podCreationTimestamp="2025-07-11 00:34:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:34:55.683290177 +0000 UTC m=+90.338607013" watchObservedRunningTime="2025-07-11 00:34:55.683411818 +0000 UTC m=+90.338728654" Jul 11 00:34:56.670253 kubelet[2072]: E0711 00:34:56.670207 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:57.451935 kubelet[2072]: E0711 00:34:57.451901 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:57.708754 systemd-networkd[1094]: lxc_health: Link UP Jul 11 00:34:57.720672 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 11 00:34:57.718704 systemd-networkd[1094]: lxc_health: Gained carrier Jul 11 00:34:58.480531 kubelet[2072]: E0711 00:34:58.480490 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:58.674019 kubelet[2072]: E0711 00:34:58.673988 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:34:58.918826 systemd-networkd[1094]: lxc_health: Gained IPv6LL Jul 11 00:34:59.549497 systemd[1]: run-containerd-runc-k8s.io-f55b5e195d233f767990206d5f20c5c4fdb5e861f435f39d20bfbe0f18042c6f-runc.lIVCeP.mount: 
Deactivated successfully. Jul 11 00:34:59.675306 kubelet[2072]: E0711 00:34:59.675276 2072 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:35:01.660801 systemd[1]: run-containerd-runc-k8s.io-f55b5e195d233f767990206d5f20c5c4fdb5e861f435f39d20bfbe0f18042c6f-runc.fzo9oe.mount: Deactivated successfully. Jul 11 00:35:03.833535 sshd[3889]: pam_unix(sshd:session): session closed for user core Jul 11 00:35:03.836019 systemd[1]: sshd@24-10.0.0.84:22-10.0.0.1:33782.service: Deactivated successfully. Jul 11 00:35:03.837140 systemd[1]: session-25.scope: Deactivated successfully. Jul 11 00:35:03.837571 systemd-logind[1301]: Session 25 logged out. Waiting for processes to exit. Jul 11 00:35:03.838352 systemd-logind[1301]: Removed session 25.