Jul 2 00:42:28.717586 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 00:42:28.717606 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Jul 1 23:37:37 -00 2024
Jul 2 00:42:28.717613 kernel: efi: EFI v2.70 by EDK II
Jul 2 00:42:28.717619 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Jul 2 00:42:28.717624 kernel: random: crng init done
Jul 2 00:42:28.717629 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:42:28.717636 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Jul 2 00:42:28.717642 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 2 00:42:28.717648 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:42:28.717653 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:42:28.717658 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:42:28.717664 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:42:28.717669 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:42:28.717675 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:42:28.717683 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:42:28.717688 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:42:28.717694 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:42:28.717700 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 2 00:42:28.717706 kernel: NUMA: Failed to initialise from firmware
Jul 2 00:42:28.717711 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 00:42:28.717717 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Jul 2 00:42:28.717722 kernel: Zone ranges:
Jul 2 00:42:28.717728 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 00:42:28.717739 kernel: DMA32 empty
Jul 2 00:42:28.717745 kernel: Normal empty
Jul 2 00:42:28.717750 kernel: Movable zone start for each node
Jul 2 00:42:28.717756 kernel: Early memory node ranges
Jul 2 00:42:28.717761 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Jul 2 00:42:28.717767 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Jul 2 00:42:28.717772 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Jul 2 00:42:28.717778 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Jul 2 00:42:28.717783 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Jul 2 00:42:28.717789 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Jul 2 00:42:28.717795 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Jul 2 00:42:28.717801 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 00:42:28.717808 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 2 00:42:28.717813 kernel: psci: probing for conduit method from ACPI.
Jul 2 00:42:28.717819 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 00:42:28.717825 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 00:42:28.717830 kernel: psci: Trusted OS migration not required
Jul 2 00:42:28.717839 kernel: psci: SMC Calling Convention v1.1
Jul 2 00:42:28.717852 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 2 00:42:28.717860 kernel: ACPI: SRAT not present
Jul 2 00:42:28.717866 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Jul 2 00:42:28.717872 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Jul 2 00:42:28.717879 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 2 00:42:28.717885 kernel: Detected PIPT I-cache on CPU0
Jul 2 00:42:28.717891 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 00:42:28.717897 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 00:42:28.717903 kernel: CPU features: detected: Spectre-v4
Jul 2 00:42:28.717909 kernel: CPU features: detected: Spectre-BHB
Jul 2 00:42:28.717916 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 00:42:28.717922 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 00:42:28.717928 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 00:42:28.717934 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 2 00:42:28.717940 kernel: Policy zone: DMA
Jul 2 00:42:28.717947 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 00:42:28.717953 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:42:28.717959 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:42:28.717966 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:42:28.717972 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:42:28.717978 kernel: Memory: 2457468K/2572288K available (9792K kernel code, 2092K rwdata, 7572K rodata, 36352K init, 777K bss, 114820K reserved, 0K cma-reserved)
Jul 2 00:42:28.717986 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 00:42:28.717992 kernel: trace event string verifier disabled
Jul 2 00:42:28.717998 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:42:28.718004 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:42:28.718011 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 00:42:28.718017 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:42:28.718023 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:42:28.718029 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
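[Annotation, not part of the log: the kernel command line above carries Flatcar's dm-verity setup for /usr. verity.usr= names the USR-A partition by PARTUUID, and verity.usrhash= is the dm-verity root hash the initrd must reproduce before it will map /dev/mapper/usr. A minimal sketch of checking that hash by hand with veritysetup from cryptsetup; HASH_OFFSET is a placeholder, since the byte offset of the appended hash tree is image-specific and not recorded in this log:]

    # Data blocks and hash tree share one partition here, so it is passed twice;
    # verification succeeds only if the computed root hash matches the kernel argument.
    veritysetup verify \
        /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132 \
        /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132 \
        7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0 \
        --hash-offset "$HASH_OFFSET"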
Jul 2 00:42:28.718035 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 00:42:28.718041 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 00:42:28.718047 kernel: GICv3: 256 SPIs implemented
Jul 2 00:42:28.718054 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 00:42:28.718060 kernel: GICv3: Distributor has no Range Selector support
Jul 2 00:42:28.718066 kernel: Root IRQ handler: gic_handle_irq
Jul 2 00:42:28.718072 kernel: GICv3: 16 PPIs implemented
Jul 2 00:42:28.718078 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 2 00:42:28.718084 kernel: ACPI: SRAT not present
Jul 2 00:42:28.718090 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 2 00:42:28.718096 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 00:42:28.718102 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 00:42:28.718108 kernel: GICv3: using LPI property table @0x00000000400d0000
Jul 2 00:42:28.718114 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Jul 2 00:42:28.718120 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:42:28.718134 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 00:42:28.718141 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 00:42:28.718148 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 00:42:28.718154 kernel: arm-pv: using stolen time PV
Jul 2 00:42:28.718160 kernel: Console: colour dummy device 80x25
Jul 2 00:42:28.718166 kernel: ACPI: Core revision 20210730
Jul 2 00:42:28.718172 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 00:42:28.718178 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:42:28.718185 kernel: LSM: Security Framework initializing
Jul 2 00:42:28.718191 kernel: SELinux: Initializing.
Jul 2 00:42:28.718198 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:42:28.718205 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:42:28.718211 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:42:28.718217 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 2 00:42:28.718223 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 2 00:42:28.718229 kernel: Remapping and enabling EFI services.
Jul 2 00:42:28.718235 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:42:28.718241 kernel: Detected PIPT I-cache on CPU1
Jul 2 00:42:28.718248 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 2 00:42:28.718255 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Jul 2 00:42:28.718262 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:42:28.718268 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 00:42:28.718274 kernel: Detected PIPT I-cache on CPU2
Jul 2 00:42:28.718280 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 2 00:42:28.718287 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Jul 2 00:42:28.718293 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:42:28.718299 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 2 00:42:28.718305 kernel: Detected PIPT I-cache on CPU3
Jul 2 00:42:28.718311 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 2 00:42:28.718319 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Jul 2 00:42:28.718325 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:42:28.718331 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 2 00:42:28.718337 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 00:42:28.718347 kernel: SMP: Total of 4 processors activated.
Jul 2 00:42:28.718355 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 00:42:28.718362 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 00:42:28.718371 kernel: CPU features: detected: Common not Private translations
Jul 2 00:42:28.718378 kernel: CPU features: detected: CRC32 instructions
Jul 2 00:42:28.718385 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 00:42:28.718391 kernel: CPU features: detected: LSE atomic instructions
Jul 2 00:42:28.718398 kernel: CPU features: detected: Privileged Access Never
Jul 2 00:42:28.718406 kernel: CPU features: detected: RAS Extension Support
Jul 2 00:42:28.718412 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 2 00:42:28.718419 kernel: CPU: All CPU(s) started at EL1
Jul 2 00:42:28.718426 kernel: alternatives: patching kernel code
Jul 2 00:42:28.718434 kernel: devtmpfs: initialized
Jul 2 00:42:28.718440 kernel: KASLR enabled
Jul 2 00:42:28.718447 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:42:28.718453 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 00:42:28.718460 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:42:28.718466 kernel: SMBIOS 3.0.0 present.
Jul 2 00:42:28.718473 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jul 2 00:42:28.718479 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:42:28.718485 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 00:42:28.718492 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 00:42:28.718500 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 00:42:28.718508 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:42:28.718515 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
Jul 2 00:42:28.718522 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:42:28.718528 kernel: cpuidle: using governor menu
Jul 2 00:42:28.718535 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 00:42:28.718541 kernel: ASID allocator initialised with 32768 entries
Jul 2 00:42:28.718547 kernel: ACPI: bus type PCI registered
Jul 2 00:42:28.718556 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:42:28.718564 kernel: Serial: AMBA PL011 UART driver
Jul 2 00:42:28.718570 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:42:28.718577 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 00:42:28.718583 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:42:28.718590 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 00:42:28.718598 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:42:28.718605 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 00:42:28.718611 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:42:28.718618 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:42:28.718626 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:42:28.718632 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:42:28.718641 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 00:42:28.718647 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 00:42:28.718654 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 00:42:28.718662 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:42:28.718669 kernel: ACPI: Interpreter enabled
Jul 2 00:42:28.718676 kernel: ACPI: Using GIC for interrupt routing
Jul 2 00:42:28.718684 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 00:42:28.718692 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 00:42:28.718698 kernel: printk: console [ttyAMA0] enabled
Jul 2 00:42:28.718705 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:42:28.718893 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:42:28.718983 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 00:42:28.719056 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 00:42:28.719136 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 2 00:42:28.719227 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 2 00:42:28.719237 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 2 00:42:28.719243 kernel: PCI host bridge to bus 0000:00
Jul 2 00:42:28.719313 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 2 00:42:28.719368 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 00:42:28.719421 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 2 00:42:28.719474 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:42:28.719548 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 2 00:42:28.719618 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 00:42:28.719680 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 2 00:42:28.719740 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 2 00:42:28.719806 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 00:42:28.719875 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 00:42:28.719937 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 2 00:42:28.720000 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 2 00:42:28.720055 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 2 00:42:28.720108 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 00:42:28.720172 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 2 00:42:28.720181 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 00:42:28.720188 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 00:42:28.720195 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 00:42:28.720204 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 00:42:28.720211 kernel: iommu: Default domain type: Translated
Jul 2 00:42:28.720217 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 00:42:28.720224 kernel: vgaarb: loaded
Jul 2 00:42:28.720230 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 00:42:28.720237 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 00:42:28.720244 kernel: PTP clock support registered
Jul 2 00:42:28.720250 kernel: Registered efivars operations
Jul 2 00:42:28.720257 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 00:42:28.720263 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:42:28.720271 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:42:28.720278 kernel: pnp: PnP ACPI init
Jul 2 00:42:28.720349 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 2 00:42:28.720362 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 00:42:28.720369 kernel: NET: Registered PF_INET protocol family
Jul 2 00:42:28.720376 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:42:28.720384 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:42:28.720391 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:42:28.720400 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:42:28.720407 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 2 00:42:28.720413 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:42:28.720420 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:42:28.720427 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:42:28.720433 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:42:28.720440 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:42:28.720446 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 2 00:42:28.720454 kernel: kvm [1]: HYP mode not available
Jul 2 00:42:28.720460 kernel: Initialise system trusted keyrings
Jul 2 00:42:28.720467 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:42:28.720473 kernel: Key type asymmetric registered
Jul 2 00:42:28.720480 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:42:28.720486 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 00:42:28.720493 kernel: io scheduler mq-deadline registered
Jul 2 00:42:28.720499 kernel: io scheduler kyber registered
Jul 2 00:42:28.720506 kernel: io scheduler bfq registered
Jul 2 00:42:28.720512 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 00:42:28.720520 kernel: ACPI: button: Power Button [PWRB]
Jul 2 00:42:28.720527 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 00:42:28.720648 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 2 00:42:28.720659 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:42:28.720666 kernel: thunder_xcv, ver 1.0
Jul 2 00:42:28.720672 kernel: thunder_bgx, ver 1.0
Jul 2 00:42:28.720679 kernel: nicpf, ver 1.0
Jul 2 00:42:28.720685 kernel: nicvf, ver 1.0
Jul 2 00:42:28.720761 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 00:42:28.720822 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T00:42:28 UTC (1719880948)
Jul 2 00:42:28.720831 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 00:42:28.720838 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:42:28.720851 kernel: Segment Routing with IPv6
Jul 2 00:42:28.720858 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:42:28.720865 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:42:28.720871 kernel: Key type dns_resolver registered
Jul 2 00:42:28.720877 kernel: registered taskstats version 1
Jul 2 00:42:28.720886 kernel: Loading compiled-in X.509 certificates
Jul 2 00:42:28.720892 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: c418313b450e4055b23e41c11cb6dc415de0265d'
Jul 2 00:42:28.720899 kernel: Key type .fscrypt registered
Jul 2 00:42:28.720905 kernel: Key type fscrypt-provisioning registered
Jul 2 00:42:28.720912 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:42:28.720918 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:42:28.720925 kernel: ima: No architecture policies found
Jul 2 00:42:28.720931 kernel: clk: Disabling unused clocks
Jul 2 00:42:28.720937 kernel: Freeing unused kernel memory: 36352K
Jul 2 00:42:28.720945 kernel: Run /init as init process
Jul 2 00:42:28.720952 kernel: with arguments:
Jul 2 00:42:28.720958 kernel: /init
Jul 2 00:42:28.720964 kernel: with environment:
Jul 2 00:42:28.720971 kernel: HOME=/
Jul 2 00:42:28.720977 kernel: TERM=linux
Jul 2 00:42:28.720983 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:42:28.720991 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 00:42:28.721001 systemd[1]: Detected virtualization kvm.
Jul 2 00:42:28.721009 systemd[1]: Detected architecture arm64.
Jul 2 00:42:28.721015 systemd[1]: Running in initrd.
Jul 2 00:42:28.721022 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:42:28.721029 systemd[1]: Hostname set to <localhost>.
Jul 2 00:42:28.721037 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:42:28.721043 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:42:28.721051 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 00:42:28.721059 systemd[1]: Reached target cryptsetup.target.
Jul 2 00:42:28.721066 systemd[1]: Reached target paths.target.
Jul 2 00:42:28.721072 systemd[1]: Reached target slices.target.
Jul 2 00:42:28.721079 systemd[1]: Reached target swap.target.
Jul 2 00:42:28.721086 systemd[1]: Reached target timers.target.
Jul 2 00:42:28.721093 systemd[1]: Listening on iscsid.socket.
Jul 2 00:42:28.721100 systemd[1]: Listening on iscsiuio.socket.
Jul 2 00:42:28.721108 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 00:42:28.721116 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 00:42:28.721123 systemd[1]: Listening on systemd-journald.socket.
Jul 2 00:42:28.721139 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 00:42:28.721146 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 00:42:28.721153 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 00:42:28.721160 systemd[1]: Reached target sockets.target.
Jul 2 00:42:28.721167 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 00:42:28.721174 systemd[1]: Finished network-cleanup.service.
Jul 2 00:42:28.721182 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:42:28.721189 systemd[1]: Starting systemd-journald.service...
Jul 2 00:42:28.721196 systemd[1]: Starting systemd-modules-load.service...
Jul 2 00:42:28.721203 systemd[1]: Starting systemd-resolved.service...
Jul 2 00:42:28.721210 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 00:42:28.721217 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 00:42:28.721223 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:42:28.721230 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 00:42:28.721237 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 00:42:28.721246 kernel: audit: type=1130 audit(1719880948.720:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.721253 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 00:42:28.721264 systemd-journald[290]: Journal started
Jul 2 00:42:28.721306 systemd-journald[290]: Runtime Journal (/run/log/journal/7b1c4fbe64f04d7c99218c7139018e88) is 6.0M, max 48.7M, 42.6M free.
Jul 2 00:42:28.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.714890 systemd-modules-load[291]: Inserted module 'overlay'
Jul 2 00:42:28.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.726185 kernel: audit: type=1130 audit(1719880948.723:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.726208 systemd[1]: Started systemd-journald.service.
Jul 2 00:42:28.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.727149 kernel: audit: type=1130 audit(1719880948.726:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.727545 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 00:42:28.736146 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:42:28.737153 systemd-resolved[292]: Positive Trust Anchors:
Jul 2 00:42:28.737166 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:42:28.737193 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 00:42:28.744501 kernel: Bridge firewalling registered
Jul 2 00:42:28.740316 systemd-modules-load[291]: Inserted module 'br_netfilter'
Jul 2 00:42:28.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.741266 systemd-resolved[292]: Defaulting to hostname 'linux'.
Jul 2 00:42:28.748316 kernel: audit: type=1130 audit(1719880948.744:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.742047 systemd[1]: Started systemd-resolved.service.
Jul 2 00:42:28.745098 systemd[1]: Reached target nss-lookup.target.
Jul 2 00:42:28.752609 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 00:42:28.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.755548 systemd[1]: Starting dracut-cmdline.service...
Jul 2 00:42:28.757162 kernel: audit: type=1130 audit(1719880948.753:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.757187 kernel: SCSI subsystem initialized
Jul 2 00:42:28.764201 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:42:28.764229 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:42:28.764489 dracut-cmdline[307]: dracut-dracut-053
Jul 2 00:42:28.765501 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 2 00:42:28.766667 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 00:42:28.770068 systemd-modules-load[291]: Inserted module 'dm_multipath'
Jul 2 00:42:28.770960 systemd[1]: Finished systemd-modules-load.service.
Jul 2 00:42:28.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.772934 systemd[1]: Starting systemd-sysctl.service...
Jul 2 00:42:28.774524 kernel: audit: type=1130 audit(1719880948.772:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.780730 systemd[1]: Finished systemd-sysctl.service.
Jul 2 00:42:28.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.784163 kernel: audit: type=1130 audit(1719880948.781:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.827153 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:42:28.840182 kernel: iscsi: registered transport (tcp)
Jul 2 00:42:28.855153 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:42:28.855183 kernel: QLogic iSCSI HBA Driver
Jul 2 00:42:28.889942 systemd[1]: Finished dracut-cmdline.service.
Jul 2 00:42:28.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.891405 systemd[1]: Starting dracut-pre-udev.service...
Jul 2 00:42:28.893659 kernel: audit: type=1130 audit(1719880948.890:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:28.937157 kernel: raid6: neonx8 gen() 13760 MB/s
Jul 2 00:42:28.954152 kernel: raid6: neonx8 xor() 10807 MB/s
Jul 2 00:42:28.971149 kernel: raid6: neonx4 gen() 13214 MB/s
Jul 2 00:42:28.988153 kernel: raid6: neonx4 xor() 9796 MB/s
Jul 2 00:42:29.005147 kernel: raid6: neonx2 gen() 12861 MB/s
Jul 2 00:42:29.022143 kernel: raid6: neonx2 xor() 10222 MB/s
Jul 2 00:42:29.039144 kernel: raid6: neonx1 gen() 10480 MB/s
Jul 2 00:42:29.056142 kernel: raid6: neonx1 xor() 8749 MB/s
Jul 2 00:42:29.080191 kernel: raid6: int64x8 gen() 6213 MB/s
Jul 2 00:42:29.090149 kernel: raid6: int64x8 xor() 3503 MB/s
Jul 2 00:42:29.107158 kernel: raid6: int64x4 gen() 7166 MB/s
Jul 2 00:42:29.124149 kernel: raid6: int64x4 xor() 3853 MB/s
Jul 2 00:42:29.141147 kernel: raid6: int64x2 gen() 6095 MB/s
Jul 2 00:42:29.158150 kernel: raid6: int64x2 xor() 3317 MB/s
Jul 2 00:42:29.175149 kernel: raid6: int64x1 gen() 5009 MB/s
Jul 2 00:42:29.192500 kernel: raid6: int64x1 xor() 2640 MB/s
Jul 2 00:42:29.192531 kernel: raid6: using algorithm neonx8 gen() 13760 MB/s
Jul 2 00:42:29.192549 kernel: raid6: .... xor() 10807 MB/s, rmw enabled
Jul 2 00:42:29.192565 kernel: raid6: using neon recovery algorithm
Jul 2 00:42:29.206187 kernel: xor: measuring software checksum speed
Jul 2 00:42:29.206212 kernel: 8regs : 17315 MB/sec
Jul 2 00:42:29.207147 kernel: 32regs : 20755 MB/sec
Jul 2 00:42:29.208290 kernel: arm64_neon : 27863 MB/sec
Jul 2 00:42:29.208302 kernel: xor: using function: arm64_neon (27863 MB/sec)
Jul 2 00:42:29.262155 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 2 00:42:29.274320 systemd[1]: Finished dracut-pre-udev.service.
Jul 2 00:42:29.277156 kernel: audit: type=1130 audit(1719880949.274:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:29.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:29.277000 audit: BPF prog-id=7 op=LOAD
Jul 2 00:42:29.277000 audit: BPF prog-id=8 op=LOAD
Jul 2 00:42:29.277886 systemd[1]: Starting systemd-udevd.service...
Jul 2 00:42:29.292967 systemd-udevd[490]: Using default interface naming scheme 'v252'.
Jul 2 00:42:29.296407 systemd[1]: Started systemd-udevd.service.
Jul 2 00:42:29.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:29.298246 systemd[1]: Starting dracut-pre-trigger.service...
Jul 2 00:42:29.308775 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Jul 2 00:42:29.335490 systemd[1]: Finished dracut-pre-trigger.service.
Jul 2 00:42:29.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:29.336900 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 00:42:29.370539 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 00:42:29.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:29.410158 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 00:42:29.412338 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:42:29.412373 kernel: GPT:9289727 != 19775487
Jul 2 00:42:29.412382 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:42:29.412392 kernel: GPT:9289727 != 19775487
Jul 2 00:42:29.413308 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:42:29.414162 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:42:29.429156 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (544)
Jul 2 00:42:29.432810 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 2 00:42:29.433985 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Jul 2 00:42:29.441771 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 2 00:42:29.445569 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 2 00:42:29.449519 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 2 00:42:29.451020 systemd[1]: Starting disk-uuid.service...
Jul 2 00:42:29.456804 disk-uuid[562]: Primary Header is updated.
Jul 2 00:42:29.456804 disk-uuid[562]: Secondary Entries is updated.
Jul 2 00:42:29.456804 disk-uuid[562]: Secondary Header is updated.
Jul 2 00:42:29.459222 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:42:30.482188 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 00:42:30.482385 disk-uuid[563]: The operation has completed successfully.
Jul 2 00:42:30.510681 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:42:30.511730 systemd[1]: Finished disk-uuid.service.
Jul 2 00:42:30.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:30.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:30.513896 systemd[1]: Starting verity-setup.service...
Jul 2 00:42:30.538240 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 00:42:30.570230 systemd[1]: Found device dev-mapper-usr.device.
Jul 2 00:42:30.572366 systemd[1]: Mounting sysusr-usr.mount...
Jul 2 00:42:30.574590 systemd[1]: Finished verity-setup.service.
Jul 2 00:42:30.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:30.628103 systemd[1]: Mounted sysusr-usr.mount.
Jul 2 00:42:30.629084 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
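[Annotation, not part of the log: the GPT warnings above are the expected first-boot symptom of a disk image that was grown after partitioning; the backup GPT header still sits at the old end of the image (sector 9289727) rather than the new one (19775487). The disk-uuid[562] lines show Flatcar's disk-uuid.service rewriting the headers itself. A sketch of the equivalent manual repair with sgdisk, on the same /dev/vda the log names:]

    sgdisk --move-second-header /dev/vda   # relocate the backup GPT header/entries to the disk's true end
    partprobe /dev/vda                     # have the kernel re-read the corrected partition table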
Jul 2 00:42:30.628724 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Jul 2 00:42:30.629465 systemd[1]: Starting ignition-setup.service...
Jul 2 00:42:30.631105 systemd[1]: Starting parse-ip-for-networkd.service...
Jul 2 00:42:30.639374 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:42:30.639414 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:42:30.639424 kernel: BTRFS info (device vda6): has skinny extents
Jul 2 00:42:30.649324 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:42:30.700349 systemd[1]: Finished parse-ip-for-networkd.service.
Jul 2 00:42:30.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:30.701000 audit: BPF prog-id=9 op=LOAD
Jul 2 00:42:30.702229 systemd[1]: Starting systemd-networkd.service...
Jul 2 00:42:30.725955 systemd-networkd[732]: lo: Link UP
Jul 2 00:42:30.725967 systemd-networkd[732]: lo: Gained carrier
Jul 2 00:42:30.726349 systemd-networkd[732]: Enumeration completed
Jul 2 00:42:30.726524 systemd-networkd[732]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:42:30.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:30.726630 systemd[1]: Started systemd-networkd.service.
Jul 2 00:42:30.727335 systemd[1]: Reached target network.target.
Jul 2 00:42:30.729027 systemd[1]: Starting iscsiuio.service...
Jul 2 00:42:30.729427 systemd-networkd[732]: eth0: Link UP
Jul 2 00:42:30.729431 systemd-networkd[732]: eth0: Gained carrier
Jul 2 00:42:30.738420 systemd[1]: Finished ignition-setup.service.
Jul 2 00:42:30.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:30.740056 systemd[1]: Starting ignition-fetch-offline.service...
Jul 2 00:42:30.743360 systemd[1]: Started iscsiuio.service.
Jul 2 00:42:30.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:30.745091 systemd[1]: Starting iscsid.service...
Jul 2 00:42:30.750013 iscsid[739]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 00:42:30.750013 iscsid[739]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Jul 2 00:42:30.750013 iscsid[739]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul 2 00:42:30.750013 iscsid[739]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 2 00:42:30.750013 iscsid[739]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 00:42:30.750013 iscsid[739]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Jul 2 00:42:30.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:30.752430 systemd-networkd[732]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:42:30.754900 systemd[1]: Started iscsid.service.
Jul 2 00:42:30.758089 systemd[1]: Starting dracut-initqueue.service...
Jul 2 00:42:30.771690 systemd[1]: Finished dracut-initqueue.service.
Jul 2 00:42:30.772458 systemd[1]: Reached target remote-fs-pre.target.
Jul 2 00:42:30.773380 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 00:42:30.774568 systemd[1]: Reached target remote-fs.target.
Jul 2 00:42:30.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:30.776483 systemd[1]: Starting dracut-pre-mount.service...
Jul 2 00:42:30.785237 systemd[1]: Finished dracut-pre-mount.service.
Jul 2 00:42:30.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:30.832186 ignition[737]: Ignition 2.14.0
Jul 2 00:42:30.832196 ignition[737]: Stage: fetch-offline
Jul 2 00:42:30.832238 ignition[737]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:42:30.832253 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:42:30.832393 ignition[737]: parsed url from cmdline: ""
Jul 2 00:42:30.832396 ignition[737]: no config URL provided
Jul 2 00:42:30.832401 ignition[737]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:42:30.832408 ignition[737]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:42:30.832428 ignition[737]: op(1): [started] loading QEMU firmware config module
Jul 2 00:42:30.832432 ignition[737]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 00:42:30.835761 ignition[737]: op(1): [finished] loading QEMU firmware config module
Jul 2 00:42:30.874940 ignition[737]: parsing config with SHA512: 246d5ad1fff4932badfae664627266a0b1d0827ee0bce5656a4401566fdcef4a3cf7749c839ac1ce9a77f2866373597252b2d2bc34a7988cdccad3e8f3569eb3
Jul 2 00:42:30.881911 unknown[737]: fetched base config from "system"
Jul 2 00:42:30.881926 unknown[737]: fetched user config from "qemu"
Jul 2 00:42:30.882499 ignition[737]: fetch-offline: fetch-offline passed
Jul 2 00:42:30.882567 ignition[737]: Ignition finished successfully
Jul 2 00:42:30.883904 systemd[1]: Finished ignition-fetch-offline.service.
Jul 2 00:42:30.884960 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 00:42:30.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:30.885801 systemd[1]: Starting ignition-kargs.service...
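[Annotation, not part of the log: the iscsid warnings above are benign in this VM, since no iSCSI targets are used, but the missing file is simple to provide. A sketch using iscsi-iname, the IQN generator shipped with open-iscsi; the path is the one iscsid names:]

    # Generate a well-formed InitiatorName once and persist it where iscsid looks.
    echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi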
Jul 2 00:42:30.894827 ignition[760]: Ignition 2.14.0
Jul 2 00:42:30.894838 ignition[760]: Stage: kargs
Jul 2 00:42:30.894955 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:42:30.894965 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:42:30.895913 ignition[760]: kargs: kargs passed
Jul 2 00:42:30.897602 systemd[1]: Finished ignition-kargs.service.
Jul 2 00:42:30.895962 ignition[760]: Ignition finished successfully
Jul 2 00:42:30.899640 systemd[1]: Starting ignition-disks.service...
Jul 2 00:42:30.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:30.908352 ignition[766]: Ignition 2.14.0
Jul 2 00:42:30.908361 ignition[766]: Stage: disks
Jul 2 00:42:30.908472 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:42:30.908483 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:42:30.909458 ignition[766]: disks: disks passed
Jul 2 00:42:30.909507 ignition[766]: Ignition finished successfully
Jul 2 00:42:30.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:30.911758 systemd[1]: Finished ignition-disks.service.
Jul 2 00:42:30.912866 systemd[1]: Reached target initrd-root-device.target.
Jul 2 00:42:30.913789 systemd[1]: Reached target local-fs-pre.target.
Jul 2 00:42:30.914796 systemd[1]: Reached target local-fs.target.
Jul 2 00:42:30.915924 systemd[1]: Reached target sysinit.target.
Jul 2 00:42:30.917168 systemd[1]: Reached target basic.target.
Jul 2 00:42:30.919192 systemd[1]: Starting systemd-fsck-root.service...
Jul 2 00:42:30.932662 systemd-fsck[774]: ROOT: clean, 614/553520 files, 56019/553472 blocks
Jul 2 00:42:30.937936 systemd[1]: Finished systemd-fsck-root.service.
Jul 2 00:42:30.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:30.939743 systemd[1]: Mounting sysroot.mount...
Jul 2 00:42:30.947149 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 2 00:42:30.948361 systemd[1]: Mounted sysroot.mount.
Jul 2 00:42:30.948970 systemd[1]: Reached target initrd-root-fs.target.
Jul 2 00:42:30.950995 systemd[1]: Mounting sysroot-usr.mount...
Jul 2 00:42:30.951748 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Jul 2 00:42:30.951784 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:42:30.951807 systemd[1]: Reached target ignition-diskful.target.
Jul 2 00:42:30.954091 systemd[1]: Mounted sysroot-usr.mount.
Jul 2 00:42:30.955999 systemd[1]: Starting initrd-setup-root.service...
Jul 2 00:42:30.960585 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:42:30.966982 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:42:30.970956 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:42:30.974783 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:42:31.002367 systemd[1]: Finished initrd-setup-root.service.
Jul 2 00:42:31.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:31.003864 systemd[1]: Starting ignition-mount.service...
Jul 2 00:42:31.005076 systemd[1]: Starting sysroot-boot.service...
Jul 2 00:42:31.010161 bash[825]: umount: /sysroot/usr/share/oem: not mounted.
Jul 2 00:42:31.018369 ignition[827]: INFO : Ignition 2.14.0
Jul 2 00:42:31.018369 ignition[827]: INFO : Stage: mount
Jul 2 00:42:31.019591 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:42:31.019591 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:42:31.021045 ignition[827]: INFO : mount: mount passed
Jul 2 00:42:31.021045 ignition[827]: INFO : Ignition finished successfully
Jul 2 00:42:31.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:31.021062 systemd[1]: Finished ignition-mount.service.
Jul 2 00:42:31.025096 systemd[1]: Finished sysroot-boot.service.
Jul 2 00:42:31.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:42:31.587822 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 00:42:31.593147 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (835)
Jul 2 00:42:31.595164 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:42:31.595181 kernel: BTRFS info (device vda6): using free space tree
Jul 2 00:42:31.595190 kernel: BTRFS info (device vda6): has skinny extents
Jul 2 00:42:31.597955 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 00:42:31.599290 systemd[1]: Starting ignition-files.service...
Jul 2 00:42:31.613411 ignition[855]: INFO : Ignition 2.14.0
Jul 2 00:42:31.613411 ignition[855]: INFO : Stage: files
Jul 2 00:42:31.614610 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:42:31.614610 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:42:31.614610 ignition[855]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:42:31.620861 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:42:31.620861 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:42:31.623154 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:42:31.624051 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:42:31.624051 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:42:31.623675 unknown[855]: wrote ssh authorized keys file for user: core
Jul 2 00:42:31.626811 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 00:42:31.626811 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 00:42:31.647608 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 00:42:31.700235 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 00:42:31.701772 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:42:31.703060 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 2 00:42:31.918296 systemd-networkd[732]: eth0: Gained IPv6LL
Jul 2 00:42:31.997827 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 00:42:32.106574 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:42:32.106574 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:42:32.109272 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:42:32.109272 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:42:32.109272 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:42:32.109272 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:42:32.109272 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:42:32.109272 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:42:32.109272 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:42:32.109272 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:42:32.109272 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:42:32.109272 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 00:42:32.109272 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 00:42:32.109272 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 00:42:32.109272 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jul 2 00:42:32.313526 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 2 00:42:32.558788 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 00:42:32.558788 ignition[855]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 2 00:42:32.561911 ignition[855]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:42:32.561911 ignition[855]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:42:32.561911 ignition[855]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 2 00:42:32.561911 ignition[855]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 2 00:42:32.561911 ignition[855]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 00:42:32.561911 ignition[855]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 00:42:32.561911 ignition[855]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 2 00:42:32.561911 ignition[855]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 2 00:42:32.561911 ignition[855]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 00:42:32.561911 ignition[855]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 00:42:32.561911 ignition[855]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 00:42:32.593921 ignition[855]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 00:42:32.596057 ignition[855]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 00:42:32.596057 ignition[855]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:42:32.596057 ignition[855]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:42:32.596057 ignition[855]: INFO : files: files passed Jul 2 00:42:32.596057 ignition[855]: INFO : Ignition finished successfully Jul 2 00:42:32.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.596147 systemd[1]: Finished ignition-files.service. Jul 2 00:42:32.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.598835 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 00:42:32.605498 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 2 00:42:32.599965 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 00:42:32.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.608732 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:42:32.600611 systemd[1]: Starting ignition-quench.service... Jul 2 00:42:32.603212 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 00:42:32.603301 systemd[1]: Finished ignition-quench.service. Jul 2 00:42:32.606082 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 00:42:32.607409 systemd[1]: Reached target ignition-complete.target. Jul 2 00:42:32.609922 systemd[1]: Starting initrd-parse-etc.service... Jul 2 00:42:32.622464 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 00:42:32.622558 systemd[1]: Finished initrd-parse-etc.service. Jul 2 00:42:32.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.623949 systemd[1]: Reached target initrd-fs.target. Jul 2 00:42:32.624970 systemd[1]: Reached target initrd.target. Jul 2 00:42:32.625941 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 00:42:32.626889 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 00:42:32.638724 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 00:42:32.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.640297 systemd[1]: Starting initrd-cleanup.service... Jul 2 00:42:32.648645 systemd[1]: Stopped target nss-lookup.target. Jul 2 00:42:32.649382 systemd[1]: Stopped target remote-cryptsetup.target.
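
The files stage above logs every write in a fixed pattern, op(N): [started]/[finished] writing file|link "...", so the manifest Ignition applied can be recovered mechanically from a captured copy of this console output. A minimal sketch in Python; "boot.log" is a hypothetical saved capture of this log:

    import re

    # Each completed write in the files stage above appears as:
    #   op(N): [finished] writing file "<path>"   (links use "writing link")
    FINISHED = re.compile(r'\[finished\] writing (file|link) "(/sysroot[^"]+)"')

    with open("boot.log") as f:                         # hypothetical capture of this console log
        for kind, path in FINISHED.findall(f.read()):
            print(kind, path.removeprefix("/sysroot"))  # removeprefix needs Python 3.9+

Run against the stage above, this would list /opt/helm-v3.13.2-linux-arm64.tar.gz, /opt/bin/cilium.tar.gz, the /home/core manifests, /etc/flatcar/update.conf, and the kubernetes sysext link and image.
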
Jul 2 00:42:32.650431 systemd[1]: Stopped target timers.target. Jul 2 00:42:32.651508 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 00:42:32.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.651622 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 00:42:32.652620 systemd[1]: Stopped target initrd.target. Jul 2 00:42:32.653641 systemd[1]: Stopped target basic.target. Jul 2 00:42:32.654720 systemd[1]: Stopped target ignition-complete.target. Jul 2 00:42:32.655788 systemd[1]: Stopped target ignition-diskful.target. Jul 2 00:42:32.656789 systemd[1]: Stopped target initrd-root-device.target. Jul 2 00:42:32.657871 systemd[1]: Stopped target remote-fs.target. Jul 2 00:42:32.658853 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 00:42:32.659918 systemd[1]: Stopped target sysinit.target. Jul 2 00:42:32.660906 systemd[1]: Stopped target local-fs.target. Jul 2 00:42:32.661921 systemd[1]: Stopped target local-fs-pre.target. Jul 2 00:42:32.662915 systemd[1]: Stopped target swap.target. Jul 2 00:42:32.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.663793 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 00:42:32.663901 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 00:42:32.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.664881 systemd[1]: Stopped target cryptsetup.target. Jul 2 00:42:32.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.665731 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:42:32.665831 systemd[1]: Stopped dracut-initqueue.service. Jul 2 00:42:32.666900 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:42:32.666990 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 00:42:32.667920 systemd[1]: Stopped target paths.target. Jul 2 00:42:32.668764 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:42:32.673184 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 00:42:32.673958 systemd[1]: Stopped target slices.target. Jul 2 00:42:32.675776 systemd[1]: Stopped target sockets.target. Jul 2 00:42:32.676762 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 00:42:32.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.676880 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 00:42:32.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.677964 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 00:42:32.678057 systemd[1]: Stopped ignition-files.service. 
Jul 2 00:42:32.681459 iscsid[739]: iscsid shutting down. Jul 2 00:42:32.680156 systemd[1]: Stopping ignition-mount.service... Jul 2 00:42:32.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.681172 systemd[1]: Stopping iscsid.service... Jul 2 00:42:32.681828 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 00:42:32.681933 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 00:42:32.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.683582 systemd[1]: Stopping sysroot-boot.service... Jul 2 00:42:32.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.684424 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 00:42:32.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.690001 ignition[896]: INFO : Ignition 2.14.0 Jul 2 00:42:32.690001 ignition[896]: INFO : Stage: umount Jul 2 00:42:32.690001 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:42:32.690001 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:42:32.690001 ignition[896]: INFO : umount: umount passed Jul 2 00:42:32.690001 ignition[896]: INFO : Ignition finished successfully Jul 2 00:42:32.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.684541 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 00:42:32.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.685730 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 00:42:32.685818 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 00:42:32.688164 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 00:42:32.688265 systemd[1]: Stopped iscsid.service. Jul 2 00:42:32.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.689677 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:42:32.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:42:32.689743 systemd[1]: Closed iscsid.socket. Jul 2 00:42:32.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.691580 systemd[1]: Stopping iscsiuio.service... Jul 2 00:42:32.692801 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 00:42:32.692891 systemd[1]: Finished initrd-cleanup.service. Jul 2 00:42:32.694260 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 00:42:32.694333 systemd[1]: Stopped ignition-mount.service. Jul 2 00:42:32.695959 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 00:42:32.696338 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 00:42:32.696408 systemd[1]: Stopped iscsiuio.service. Jul 2 00:42:32.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.698658 systemd[1]: Stopped target network.target. Jul 2 00:42:32.700213 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:42:32.700250 systemd[1]: Closed iscsiuio.socket. Jul 2 00:42:32.701204 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 00:42:32.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.701244 systemd[1]: Stopped ignition-disks.service. Jul 2 00:42:32.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.702477 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 00:42:32.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.702513 systemd[1]: Stopped ignition-kargs.service. Jul 2 00:42:32.703573 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 00:42:32.703608 systemd[1]: Stopped ignition-setup.service. Jul 2 00:42:32.704875 systemd[1]: Stopping systemd-networkd.service... Jul 2 00:42:32.705849 systemd[1]: Stopping systemd-resolved.service... Jul 2 00:42:32.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.709167 systemd-networkd[732]: eth0: DHCPv6 lease lost Jul 2 00:42:32.726000 audit: BPF prog-id=9 op=UNLOAD Jul 2 00:42:32.710234 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:42:32.710327 systemd[1]: Stopped systemd-networkd.service. Jul 2 00:42:32.712033 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:42:32.712061 systemd[1]: Closed systemd-networkd.socket. Jul 2 00:42:32.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:42:32.731000 audit: BPF prog-id=6 op=UNLOAD Jul 2 00:42:32.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.713498 systemd[1]: Stopping network-cleanup.service... Jul 2 00:42:32.714600 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 00:42:32.714656 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 00:42:32.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.716386 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:42:32.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.716425 systemd[1]: Stopped systemd-sysctl.service. Jul 2 00:42:32.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.718352 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 00:42:32.718393 systemd[1]: Stopped systemd-modules-load.service. Jul 2 00:42:32.719324 systemd[1]: Stopping systemd-udevd.service... Jul 2 00:42:32.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.723778 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 00:42:32.724786 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:42:32.724910 systemd[1]: Stopped systemd-resolved.service. Jul 2 00:42:32.726866 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 00:42:32.726952 systemd[1]: Stopped network-cleanup.service. Jul 2 00:42:32.730243 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 00:42:32.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.730359 systemd[1]: Stopped systemd-udevd.service. Jul 2 00:42:32.732307 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:42:32.732343 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 00:42:32.733273 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:42:32.733304 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 00:42:32.734337 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:42:32.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:42:32.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:32.734379 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 00:42:32.735596 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:42:32.735628 systemd[1]: Stopped dracut-cmdline.service. Jul 2 00:42:32.736675 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:42:32.736715 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 00:42:32.738682 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 00:42:32.739932 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:42:32.739984 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 00:42:32.744091 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:42:32.744194 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 00:42:32.746257 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 00:42:32.746339 systemd[1]: Stopped sysroot-boot.service. Jul 2 00:42:32.747438 systemd[1]: Reached target initrd-switch-root.target. Jul 2 00:42:32.748495 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:42:32.748535 systemd[1]: Stopped initrd-setup-root.service. Jul 2 00:42:32.750169 systemd[1]: Starting initrd-switch-root.service... Jul 2 00:42:32.756186 systemd[1]: Switching root. Jul 2 00:42:32.768882 systemd-journald[290]: Journal stopped Jul 2 00:42:34.831476 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). Jul 2 00:42:34.831543 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 00:42:34.831557 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 00:42:34.831567 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 00:42:34.831577 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 00:42:34.831587 kernel: SELinux: policy capability open_perms=1 Jul 2 00:42:34.831597 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 00:42:34.831609 kernel: SELinux: policy capability always_check_network=0 Jul 2 00:42:34.831618 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 00:42:34.831627 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 00:42:34.831637 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 00:42:34.831647 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 00:42:34.831657 systemd[1]: Successfully loaded SELinux policy in 38.602ms. Jul 2 00:42:34.831676 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.558ms. Jul 2 00:42:34.831688 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 00:42:34.831700 systemd[1]: Detected virtualization kvm. Jul 2 00:42:34.831711 systemd[1]: Detected architecture arm64. Jul 2 00:42:34.831721 systemd[1]: Detected first boot. Jul 2 00:42:34.831732 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:42:34.831747 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
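
The last two messages from the initrd journald (PID 290) above bracket the root switch: "Journal stopped" at 00:42:32.768882 and "Received SIGTERM" logged at 00:42:34.831476. Differencing the two recorded timestamps gives a rough bound on the transition window (the SELinux policy load alone accounts for 38.602ms of it, per the line above); a quick check:

    from datetime import datetime

    FMT = "%H:%M:%S.%f"
    stopped = datetime.strptime("00:42:32.768882", FMT)  # "Journal stopped" above
    sigterm = datetime.strptime("00:42:34.831476", FMT)  # "Received SIGTERM from PID 1"
    print((sigterm - stopped).total_seconds())           # 2.062594 s across the root switch
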
Jul 2 00:42:34.831758 kernel: kauditd_printk_skb: 71 callbacks suppressed Jul 2 00:42:34.831792 kernel: audit: type=1400 audit(1719880953.029:82): avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 00:42:34.831805 kernel: audit: type=1300 audit(1719880953.029:82): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58c4 a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:42:34.831815 kernel: audit: type=1327 audit(1719880953.029:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 00:42:34.831825 kernel: audit: type=1400 audit(1719880953.031:83): avc: denied { associate } for pid=930 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 00:42:34.831837 kernel: audit: type=1300 audit(1719880953.031:83): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c59a9 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:42:34.831856 kernel: audit: type=1307 audit(1719880953.031:83): cwd="/" Jul 2 00:42:34.831867 kernel: audit: type=1302 audit(1719880953.031:83): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:42:34.831877 kernel: audit: type=1302 audit(1719880953.031:83): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:42:34.831888 kernel: audit: type=1327 audit(1719880953.031:83): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 00:42:34.831897 systemd[1]: Populated /etc with preset unit settings. Jul 2 00:42:34.831909 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:42:34.831920 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:42:34.831944 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
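
The audit PROCTITLE records above (type=1327) carry the torcx-generator command line hex-encoded, with NUL bytes separating arguments. Decoding takes two lines; the value below is copied verbatim from the records above, and since the record itself is truncated, the last argument comes back truncated too:

    # PROCTITLE value copied verbatim from the audit records above (already truncated there).
    raw = ("2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F"
           "746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261"
           "746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72"
           "756E2F73797374656D642F67656E657261746F722E6C61")
    argv = [a.decode() for a in bytes.fromhex(raw).split(b"\x00")]
    print(argv)
    # ['/usr/lib/systemd/system-generators/torcx-generator', '/run/systemd/generator',
    #  '/run/systemd/generator.early', '/run/systemd/generator.la']  <- cut short in the record
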
Jul 2 00:42:34.831956 kernel: audit: type=1334 audit(1719880954.717:84): prog-id=12 op=LOAD Jul 2 00:42:34.831966 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 00:42:34.831976 systemd[1]: Stopped initrd-switch-root.service. Jul 2 00:42:34.831986 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 00:42:34.831997 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 00:42:34.832007 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 00:42:34.832020 systemd[1]: Created slice system-getty.slice. Jul 2 00:42:34.832031 systemd[1]: Created slice system-modprobe.slice. Jul 2 00:42:34.832045 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 00:42:34.832056 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 00:42:34.832067 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 00:42:34.832077 systemd[1]: Created slice user.slice. Jul 2 00:42:34.832087 systemd[1]: Started systemd-ask-password-console.path. Jul 2 00:42:34.832097 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 00:42:34.832109 systemd[1]: Set up automount boot.automount. Jul 2 00:42:34.832120 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 00:42:34.832142 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 00:42:34.832153 systemd[1]: Stopped target initrd-fs.target. Jul 2 00:42:34.832164 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 00:42:34.832174 systemd[1]: Reached target integritysetup.target. Jul 2 00:42:34.832185 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 00:42:34.832196 systemd[1]: Reached target remote-fs.target. Jul 2 00:42:34.832207 systemd[1]: Reached target slices.target. Jul 2 00:42:34.832218 systemd[1]: Reached target swap.target. Jul 2 00:42:34.832229 systemd[1]: Reached target torcx.target. Jul 2 00:42:34.832239 systemd[1]: Reached target veritysetup.target. Jul 2 00:42:34.832250 systemd[1]: Listening on systemd-coredump.socket. Jul 2 00:42:34.832260 systemd[1]: Listening on systemd-initctl.socket. Jul 2 00:42:34.832270 systemd[1]: Listening on systemd-networkd.socket. Jul 2 00:42:34.832281 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 00:42:34.832292 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 00:42:34.832303 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 00:42:34.832315 systemd[1]: Mounting dev-hugepages.mount... Jul 2 00:42:34.832325 systemd[1]: Mounting dev-mqueue.mount... Jul 2 00:42:34.832336 systemd[1]: Mounting media.mount... Jul 2 00:42:34.832347 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 00:42:34.832357 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 00:42:34.832367 systemd[1]: Mounting tmp.mount... Jul 2 00:42:34.832378 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 00:42:34.832388 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:42:34.832399 systemd[1]: Starting kmod-static-nodes.service... Jul 2 00:42:34.832409 systemd[1]: Starting modprobe@configfs.service... Jul 2 00:42:34.832422 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:42:34.832432 systemd[1]: Starting modprobe@drm.service... Jul 2 00:42:34.832444 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:42:34.832455 systemd[1]: Starting modprobe@fuse.service... Jul 2 00:42:34.832466 systemd[1]: Starting modprobe@loop.service... 
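
Slice names above such as system-serial\x2dgetty.slice use systemd's \xNN escaping: a '-' inside a name component becomes \x2d so it cannot be confused with the '-' that joins components. A minimal unescape covering only the \xNN form seen in this log (the systemd-escape(1) tool implements the full rules):

    import re

    def unescape_unit(name: str) -> str:
        # Undo the \xNN escapes systemd uses in unit names ('-' in a component -> \x2d).
        return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(r"system-serial\x2dgetty.slice"))     # system-serial-getty.slice
    print(unescape_unit(r"dev-disk-by\x2dlabel-OEM.device"))  # dev-disk-by-label-OEM.device
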
Jul 2 00:42:34.832477 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 00:42:34.832488 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 00:42:34.832498 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 00:42:34.832510 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 00:42:34.832521 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 00:42:34.832539 systemd[1]: Stopped systemd-journald.service. Jul 2 00:42:34.832550 systemd[1]: Starting systemd-journald.service... Jul 2 00:42:34.832561 kernel: loop: module loaded Jul 2 00:42:34.832572 kernel: fuse: init (API version 7.34) Jul 2 00:42:34.832584 systemd[1]: Starting systemd-modules-load.service... Jul 2 00:42:34.832594 systemd[1]: Starting systemd-network-generator.service... Jul 2 00:42:34.832604 systemd[1]: Starting systemd-remount-fs.service... Jul 2 00:42:34.832616 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 00:42:34.832626 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 00:42:34.832637 systemd[1]: Stopped verity-setup.service. Jul 2 00:42:34.832649 systemd[1]: Mounted dev-hugepages.mount. Jul 2 00:42:34.832660 systemd[1]: Mounted dev-mqueue.mount. Jul 2 00:42:34.832671 systemd[1]: Mounted media.mount. Jul 2 00:42:34.832683 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 00:42:34.832694 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 00:42:34.832704 systemd[1]: Mounted tmp.mount. Jul 2 00:42:34.832714 systemd[1]: Finished kmod-static-nodes.service. Jul 2 00:42:34.832725 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 00:42:34.832735 systemd[1]: Finished modprobe@configfs.service. Jul 2 00:42:34.832747 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:42:34.832758 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:42:34.832770 systemd-journald[996]: Journal started Jul 2 00:42:34.832812 systemd-journald[996]: Runtime Journal (/run/log/journal/7b1c4fbe64f04d7c99218c7139018e88) is 6.0M, max 48.7M, 42.6M free. 
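
The journald status line above gives the runtime journal's location (/run/log/journal/7b1c4fbe64f04d7c99218c7139018e88) and its size caps. Once the system is up, the same stream can be queried programmatically; a sketch assuming the python-systemd bindings (systemd.journal) are installed:

    from systemd import journal   # python-systemd bindings (assumed installed)

    j = journal.Reader()
    j.this_boot()                 # restrict to the boot shown in this log
    j.add_match(_SYSTEMD_UNIT="systemd-journald.service")
    for entry in j:
        print(entry.get("MESSAGE", ""))
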
Jul 2 00:42:32.860000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 00:42:32.972000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 00:42:32.972000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 00:42:32.973000 audit: BPF prog-id=10 op=LOAD Jul 2 00:42:32.973000 audit: BPF prog-id=10 op=UNLOAD Jul 2 00:42:32.973000 audit: BPF prog-id=11 op=LOAD Jul 2 00:42:32.973000 audit: BPF prog-id=11 op=UNLOAD Jul 2 00:42:33.029000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 00:42:33.029000 audit[930]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58c4 a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:42:33.029000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 00:42:33.031000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 00:42:33.031000 audit[930]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c59a9 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:42:33.031000 audit: CWD cwd="/" Jul 2 00:42:33.031000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:42:33.031000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:42:33.031000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 00:42:34.717000 audit: BPF prog-id=12 op=LOAD Jul 2 00:42:34.717000 audit: BPF prog-id=3 op=UNLOAD Jul 2 00:42:34.717000 audit: BPF prog-id=13 op=LOAD Jul 2 00:42:34.718000 audit: BPF prog-id=14 op=LOAD Jul 2 00:42:34.718000 audit: BPF prog-id=4 op=UNLOAD Jul 2 00:42:34.718000 audit: BPF prog-id=5 op=UNLOAD Jul 2 00:42:34.718000 audit: BPF prog-id=15 op=LOAD Jul 2 00:42:34.718000 audit: BPF prog-id=12 op=UNLOAD Jul 2 00:42:34.718000 audit: BPF prog-id=16 
op=LOAD Jul 2 00:42:34.718000 audit: BPF prog-id=17 op=LOAD Jul 2 00:42:34.718000 audit: BPF prog-id=13 op=UNLOAD Jul 2 00:42:34.718000 audit: BPF prog-id=14 op=UNLOAD Jul 2 00:42:34.719000 audit: BPF prog-id=18 op=LOAD Jul 2 00:42:34.719000 audit: BPF prog-id=15 op=UNLOAD Jul 2 00:42:34.719000 audit: BPF prog-id=19 op=LOAD Jul 2 00:42:34.719000 audit: BPF prog-id=20 op=LOAD Jul 2 00:42:34.719000 audit: BPF prog-id=16 op=UNLOAD Jul 2 00:42:34.719000 audit: BPF prog-id=17 op=UNLOAD Jul 2 00:42:34.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.729000 audit: BPF prog-id=18 op=UNLOAD Jul 2 00:42:34.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.802000 audit: BPF prog-id=21 op=LOAD Jul 2 00:42:34.803000 audit: BPF prog-id=22 op=LOAD Jul 2 00:42:34.803000 audit: BPF prog-id=23 op=LOAD Jul 2 00:42:34.803000 audit: BPF prog-id=19 op=UNLOAD Jul 2 00:42:34.803000 audit: BPF prog-id=20 op=UNLOAD Jul 2 00:42:34.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:42:34.829000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 00:42:34.829000 audit[996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff596cf30 a2=4000 a3=1 items=0 ppid=1 pid=996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:42:34.829000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 00:42:34.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:33.027291 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:42:34.716154 systemd[1]: Queued start job for default target multi-user.target. Jul 2 00:42:34.834176 systemd[1]: Started systemd-journald.service. Jul 2 00:42:33.027745 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 00:42:34.716167 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 2 00:42:33.027766 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 00:42:34.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.720110 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 00:42:33.027798 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 00:42:33.027807 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 00:42:33.027848 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 00:42:34.834584 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 2 00:42:33.027863 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 00:42:34.834722 systemd[1]: Finished modprobe@drm.service. Jul 2 00:42:33.028054 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 00:42:33.028089 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 00:42:33.028101 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 00:42:33.028884 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 00:42:33.028923 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 00:42:34.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:33.028942 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 00:42:33.028956 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 00:42:33.028972 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 00:42:33.028986 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 00:42:34.464694 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:34Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 00:42:34.464965 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:34Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 00:42:34.835796 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 2 00:42:34.465188 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:34Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 00:42:34.465362 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:34Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 00:42:34.835942 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:42:34.465410 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:34Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 00:42:34.465468 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-07-02T00:42:34Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 00:42:34.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.836928 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 00:42:34.837303 systemd[1]: Finished modprobe@fuse.service. Jul 2 00:42:34.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.838101 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:42:34.838311 systemd[1]: Finished modprobe@loop.service. Jul 2 00:42:34.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.839158 systemd[1]: Finished systemd-modules-load.service. Jul 2 00:42:34.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.840048 systemd[1]: Finished systemd-network-generator.service. 
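
The torcx-generator messages interleaved above show its fixed store search order and a "store skipped" probe for each missing path before the docker archive is unpacked and its binaries and units propagated. A sketch reproducing that probe order, with the paths taken verbatim from the "common configuration parsed" line above:

    from pathlib import Path

    # Store search order, copied from the "common configuration parsed" line above.
    STORE_PATHS = [
        "/usr/share/torcx/store",
        "/usr/share/oem/torcx/store/3510.3.5",
        "/usr/share/oem/torcx/store",
        "/var/lib/torcx/store/3510.3.5",
        "/var/lib/torcx/store",
    ]

    def find_archives():
        for store in map(Path, STORE_PATHS):
            if not store.is_dir():
                print(f"store skipped: {store}")          # mirrors the level=info messages
                continue
            yield from sorted(store.glob("*.torcx.tgz"))  # e.g. docker:com.coreos.cl.torcx.tgz

    print([a.name for a in find_archives()])
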
Jul 2 00:42:34.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.841050 systemd[1]: Finished systemd-remount-fs.service. Jul 2 00:42:34.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.842280 systemd[1]: Reached target network-pre.target. Jul 2 00:42:34.843856 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 00:42:34.845578 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 00:42:34.846257 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 00:42:34.848809 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 00:42:34.850560 systemd[1]: Starting systemd-journal-flush.service... Jul 2 00:42:34.851324 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:42:34.852385 systemd[1]: Starting systemd-random-seed.service... Jul 2 00:42:34.853115 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:42:34.854100 systemd[1]: Starting systemd-sysctl.service... Jul 2 00:42:34.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.856258 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 00:42:34.857043 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 00:42:34.858719 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 00:42:34.859582 systemd-journald[996]: Time spent on flushing to /var/log/journal/7b1c4fbe64f04d7c99218c7139018e88 is 16.006ms for 1003 entries. Jul 2 00:42:34.859582 systemd-journald[996]: System Journal (/var/log/journal/7b1c4fbe64f04d7c99218c7139018e88) is 8.0M, max 195.6M, 187.6M free. Jul 2 00:42:34.885417 systemd-journald[996]: Received client request to flush runtime journal. Jul 2 00:42:34.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.860693 systemd[1]: Starting systemd-sysusers.service... Jul 2 00:42:34.862921 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 00:42:34.886600 udevadm[1030]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 00:42:34.864991 systemd[1]: Starting systemd-udev-settle.service... Jul 2 00:42:34.865957 systemd[1]: Finished systemd-random-seed.service. Jul 2 00:42:34.866845 systemd[1]: Reached target first-boot-complete.target. Jul 2 00:42:34.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:34.878652 systemd[1]: Finished systemd-sysctl.service. Jul 2 00:42:34.879981 systemd[1]: Finished systemd-sysusers.service. Jul 2 00:42:34.886272 systemd[1]: Finished systemd-journal-flush.service. Jul 2 00:42:35.222933 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 00:42:35.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.224000 audit: BPF prog-id=24 op=LOAD Jul 2 00:42:35.224000 audit: BPF prog-id=25 op=LOAD Jul 2 00:42:35.224000 audit: BPF prog-id=7 op=UNLOAD Jul 2 00:42:35.224000 audit: BPF prog-id=8 op=UNLOAD Jul 2 00:42:35.226046 systemd[1]: Starting systemd-udevd.service... Jul 2 00:42:35.246695 systemd-udevd[1034]: Using default interface naming scheme 'v252'. Jul 2 00:42:35.257554 systemd[1]: Started systemd-udevd.service. Jul 2 00:42:35.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.258000 audit: BPF prog-id=26 op=LOAD Jul 2 00:42:35.259667 systemd[1]: Starting systemd-networkd.service... Jul 2 00:42:35.266000 audit: BPF prog-id=27 op=LOAD Jul 2 00:42:35.267000 audit: BPF prog-id=28 op=LOAD Jul 2 00:42:35.267000 audit: BPF prog-id=29 op=LOAD Jul 2 00:42:35.267836 systemd[1]: Starting systemd-userdbd.service... Jul 2 00:42:35.299933 systemd[1]: Started systemd-userdbd.service. Jul 2 00:42:35.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.302583 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 2 00:42:35.314091 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 00:42:35.358375 systemd-networkd[1041]: lo: Link UP Jul 2 00:42:35.358385 systemd-networkd[1041]: lo: Gained carrier Jul 2 00:42:35.358693 systemd-networkd[1041]: Enumeration completed Jul 2 00:42:35.358786 systemd-networkd[1041]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:42:35.358797 systemd[1]: Started systemd-networkd.service. Jul 2 00:42:35.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.359868 systemd-networkd[1041]: eth0: Link UP Jul 2 00:42:35.359874 systemd-networkd[1041]: eth0: Gained carrier Jul 2 00:42:35.367459 systemd[1]: Finished systemd-udev-settle.service.
Jul 2 00:42:35.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.369334 systemd[1]: Starting lvm2-activation-early.service... Jul 2 00:42:35.384854 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:42:35.386237 systemd-networkd[1041]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 00:42:35.409078 systemd[1]: Finished lvm2-activation-early.service. Jul 2 00:42:35.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.409943 systemd[1]: Reached target cryptsetup.target. Jul 2 00:42:35.411665 systemd[1]: Starting lvm2-activation.service... Jul 2 00:42:35.415096 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:42:35.454991 systemd[1]: Finished lvm2-activation.service. Jul 2 00:42:35.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.455816 systemd[1]: Reached target local-fs-pre.target. Jul 2 00:42:35.456503 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 00:42:35.456542 systemd[1]: Reached target local-fs.target. Jul 2 00:42:35.457113 systemd[1]: Reached target machines.target. Jul 2 00:42:35.458885 systemd[1]: Starting ldconfig.service... Jul 2 00:42:35.459837 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:42:35.459898 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:42:35.460908 systemd[1]: Starting systemd-boot-update.service... Jul 2 00:42:35.462676 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 00:42:35.464890 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 00:42:35.467694 systemd[1]: Starting systemd-sysext.service... Jul 2 00:42:35.468760 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1070 (bootctl) Jul 2 00:42:35.469788 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 00:42:35.474336 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 00:42:35.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.482626 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 00:42:35.488594 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 00:42:35.488781 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 00:42:35.500165 kernel: loop0: detected capacity change from 0 to 194512 Jul 2 00:42:35.539292 systemd[1]: Finished systemd-machine-id-commit.service. 
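
networkd's DHCPv4 line above reports the lease 10.0.0.35/16 with gateway 10.0.0.1; the implied network bounds fall straight out of the prefix with the standard library, which also confirms the gateway sits inside the leased prefix:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.35/16")   # address/prefix as leased above
    net = iface.network
    print(net, net.num_addresses)                    # 10.0.0.0/16 65536
    print(ipaddress.ip_address("10.0.0.1") in net)   # True: gateway is on-link
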
Jul 2 00:42:35.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.547168 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 00:42:35.559285 systemd-fsck[1080]: fsck.fat 4.2 (2021-01-31) Jul 2 00:42:35.559285 systemd-fsck[1080]: /dev/vda1: 236 files, 117047/258078 clusters Jul 2 00:42:35.560770 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 00:42:35.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.566159 kernel: loop1: detected capacity change from 0 to 194512 Jul 2 00:42:35.570891 (sd-sysext)[1084]: Using extensions 'kubernetes'. Jul 2 00:42:35.571482 (sd-sysext)[1084]: Merged extensions into '/usr'. Jul 2 00:42:35.590042 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:42:35.591352 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:42:35.592995 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:42:35.594772 systemd[1]: Starting modprobe@loop.service... Jul 2 00:42:35.595394 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:42:35.595516 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:42:35.596267 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:42:35.596382 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:42:35.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.597565 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:42:35.597689 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:42:35.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.598859 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:42:35.598970 systemd[1]: Finished modprobe@loop.service. Jul 2 00:42:35.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:42:35.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.600014 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:42:35.600108 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:42:35.637478 ldconfig[1069]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 00:42:35.641084 systemd[1]: Finished ldconfig.service. Jul 2 00:42:35.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.819073 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 00:42:35.820812 systemd[1]: Mounting boot.mount... Jul 2 00:42:35.822526 systemd[1]: Mounting usr-share-oem.mount... Jul 2 00:42:35.828651 systemd[1]: Mounted boot.mount. Jul 2 00:42:35.829479 systemd[1]: Mounted usr-share-oem.mount. Jul 2 00:42:35.831469 systemd[1]: Finished systemd-sysext.service. Jul 2 00:42:35.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.833650 systemd[1]: Starting ensure-sysext.service... Jul 2 00:42:35.835501 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 00:42:35.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:35.838414 systemd[1]: Finished systemd-boot-update.service. Jul 2 00:42:35.840868 systemd[1]: Reloading. Jul 2 00:42:35.850906 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 00:42:35.852417 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 00:42:35.854855 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 00:42:35.878893 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2024-07-02T00:42:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:42:35.878923 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2024-07-02T00:42:35Z" level=info msg="torcx already run" Jul 2 00:42:35.936996 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:42:35.937016 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
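The '(sd-sysext)' lines above record a system extension image named 'kubernetes' being overlaid onto /usr. On images that ship the systemd-sysext client, the merge can be inspected and redone by hand; a sketch, assuming systemd-sysext is on the path:

    $ systemd-sysext status    # which hierarchies are extended, and by what
    $ systemd-sysext list      # extension images found under /var/lib/extensions and friends
    $ systemd-sysext refresh   # unmerge and re-merge after adding or removing an image
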
Jul 2 00:42:35.952814 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:42:35.996000 audit: BPF prog-id=30 op=LOAD Jul 2 00:42:35.996000 audit: BPF prog-id=26 op=UNLOAD Jul 2 00:42:35.997000 audit: BPF prog-id=31 op=LOAD Jul 2 00:42:35.997000 audit: BPF prog-id=32 op=LOAD Jul 2 00:42:35.997000 audit: BPF prog-id=24 op=UNLOAD Jul 2 00:42:35.997000 audit: BPF prog-id=25 op=UNLOAD Jul 2 00:42:35.999000 audit: BPF prog-id=33 op=LOAD Jul 2 00:42:35.999000 audit: BPF prog-id=27 op=UNLOAD Jul 2 00:42:35.999000 audit: BPF prog-id=34 op=LOAD Jul 2 00:42:35.999000 audit: BPF prog-id=35 op=LOAD Jul 2 00:42:35.999000 audit: BPF prog-id=28 op=UNLOAD Jul 2 00:42:35.999000 audit: BPF prog-id=29 op=UNLOAD Jul 2 00:42:36.000000 audit: BPF prog-id=36 op=LOAD Jul 2 00:42:36.000000 audit: BPF prog-id=21 op=UNLOAD Jul 2 00:42:36.000000 audit: BPF prog-id=37 op=LOAD Jul 2 00:42:36.000000 audit: BPF prog-id=38 op=LOAD Jul 2 00:42:36.000000 audit: BPF prog-id=22 op=UNLOAD Jul 2 00:42:36.000000 audit: BPF prog-id=23 op=UNLOAD Jul 2 00:42:36.002354 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 00:42:36.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.007327 systemd[1]: Starting audit-rules.service... Jul 2 00:42:36.009328 systemd[1]: Starting clean-ca-certificates.service... Jul 2 00:42:36.011518 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 00:42:36.016000 audit: BPF prog-id=39 op=LOAD Jul 2 00:42:36.017939 systemd[1]: Starting systemd-resolved.service... Jul 2 00:42:36.019000 audit: BPF prog-id=40 op=LOAD Jul 2 00:42:36.020492 systemd[1]: Starting systemd-timesyncd.service... Jul 2 00:42:36.022364 systemd[1]: Starting systemd-update-utmp.service... Jul 2 00:42:36.025869 systemd[1]: Finished clean-ca-certificates.service. Jul 2 00:42:36.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.029964 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:42:36.031609 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:42:36.033286 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:42:36.034000 audit[1160]: SYSTEM_BOOT pid=1160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.035095 systemd[1]: Starting modprobe@loop.service... Jul 2 00:42:36.035723 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:42:36.035859 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:42:36.035968 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:42:36.036831 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
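The three tmpfiles.d warnings are benign: when two config files carry a line for the same path, systemd-tmpfiles keeps the first definition it sees in precedence order and ignores the rest. To locate the competing definitions, one option is:

    $ systemd-tmpfiles --cat-config | grep -n '/run/lock'   # every line any tmpfiles.d file contributes for that path
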
Jul 2 00:42:36.036967 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:42:36.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.037999 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:42:36.038109 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:42:36.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.039191 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:42:36.039307 systemd[1]: Finished modprobe@loop.service. Jul 2 00:42:36.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.043736 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 00:42:36.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.045058 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:42:36.046316 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:42:36.047983 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:42:36.049770 systemd[1]: Starting modprobe@loop.service... Jul 2 00:42:36.050719 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:42:36.050867 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:42:36.052111 systemd[1]: Starting systemd-update-done.service... Jul 2 00:42:36.052772 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:42:36.053787 systemd[1]: Finished systemd-update-utmp.service. Jul 2 00:42:36.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.054945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:42:36.055071 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 2 00:42:36.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.056309 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:42:36.056430 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:42:36.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.057468 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:42:36.057580 systemd[1]: Finished modprobe@loop.service. Jul 2 00:42:36.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.061633 systemd[1]: Finished systemd-update-done.service. Jul 2 00:42:36.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.062995 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:42:36.065440 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:42:36.067468 systemd[1]: Starting modprobe@drm.service... Jul 2 00:42:36.069545 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:42:36.071413 systemd[1]: Starting modprobe@loop.service... Jul 2 00:42:36.072124 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:42:36.072256 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:42:36.073703 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 00:42:36.074519 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:42:36.075639 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:42:36.075780 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:42:36.076896 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:42:36.077008 systemd[1]: Finished modprobe@drm.service. Jul 2 00:42:36.078027 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:42:36.078165 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 2 00:42:36.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.079055 systemd[1]: Started systemd-timesyncd.service. Jul 2 00:42:36.079526 systemd-timesyncd[1158]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 00:42:36.079576 systemd-timesyncd[1158]: Initial clock synchronization to Tue 2024-07-02 00:42:36.340537 UTC. Jul 2 00:42:36.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.080452 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:42:36.080593 systemd[1]: Finished modprobe@loop.service. Jul 2 00:42:36.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.082245 systemd[1]: Reached target time-set.target. Jul 2 00:42:36.082884 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:42:36.082918 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:42:36.083418 systemd[1]: Finished ensure-sysext.service. Jul 2 00:42:36.083716 systemd-resolved[1155]: Positive Trust Anchors: Jul 2 00:42:36.083728 systemd-resolved[1155]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:42:36.083754 systemd-resolved[1155]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 00:42:36.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:42:36.093000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 00:42:36.093000 audit[1185]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc60f5c50 a2=420 a3=0 items=0 ppid=1151 pid=1185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:42:36.093000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 00:42:36.094123 systemd-resolved[1155]: Defaulting to hostname 'linux'. Jul 2 00:42:36.094365 augenrules[1185]: No rules Jul 2 00:42:36.095206 systemd[1]: Finished audit-rules.service. Jul 2 00:42:36.095879 systemd[1]: Started systemd-resolved.service. Jul 2 00:42:36.096541 systemd[1]: Reached target network.target. Jul 2 00:42:36.097084 systemd[1]: Reached target nss-lookup.target. Jul 2 00:42:36.097679 systemd[1]: Reached target sysinit.target. Jul 2 00:42:36.098282 systemd[1]: Started motdgen.path. Jul 2 00:42:36.098781 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 00:42:36.099751 systemd[1]: Started logrotate.timer. Jul 2 00:42:36.100397 systemd[1]: Started mdadm.timer. Jul 2 00:42:36.100881 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 00:42:36.101498 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:42:36.101523 systemd[1]: Reached target paths.target. Jul 2 00:42:36.102036 systemd[1]: Reached target timers.target. Jul 2 00:42:36.102932 systemd[1]: Listening on dbus.socket. Jul 2 00:42:36.104749 systemd[1]: Starting docker.socket... Jul 2 00:42:36.107666 systemd[1]: Listening on sshd.socket. Jul 2 00:42:36.108345 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:42:36.108781 systemd[1]: Listening on docker.socket. Jul 2 00:42:36.109440 systemd[1]: Reached target sockets.target. Jul 2 00:42:36.110072 systemd[1]: Reached target basic.target. Jul 2 00:42:36.110662 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 00:42:36.110692 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 00:42:36.111669 systemd[1]: Starting containerd.service... Jul 2 00:42:36.113262 systemd[1]: Starting dbus.service... Jul 2 00:42:36.114758 systemd[1]: Starting enable-oem-cloudinit.service... 
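The positive trust anchor resolved loads here matches the published IANA root zone DS record (key tag 20326), i.e. the stock DNSSEC root of trust; the negative anchors exempt locally served zones (private reverse ranges, .local, .test, and similar) from validation. Once the service is up, its view can be queried; a quick check, assuming resolvectl:

    $ resolvectl status             # per-link DNS servers, search domains, and DNSSEC setting
    $ resolvectl query example.com  # exercise the resolver end to end
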
Jul 2 00:42:36.116688 systemd[1]: Starting extend-filesystems.service... Jul 2 00:42:36.117513 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 00:42:36.119287 systemd[1]: Starting motdgen.service... Jul 2 00:42:36.121716 systemd[1]: Starting prepare-helm.service... Jul 2 00:42:36.123618 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 00:42:36.127265 systemd[1]: Starting sshd-keygen.service... Jul 2 00:42:36.132036 systemd[1]: Starting systemd-logind.service... Jul 2 00:42:36.132845 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:42:36.133748 jq[1194]: false Jul 2 00:42:36.132953 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:42:36.133440 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:42:36.134260 systemd[1]: Starting update-engine.service... Jul 2 00:42:36.136517 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 00:42:36.139643 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:42:36.139835 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 00:42:36.140430 jq[1209]: true Jul 2 00:42:36.140798 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:42:36.140984 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 00:42:36.152442 jq[1213]: true Jul 2 00:42:36.159208 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:42:36.159471 systemd[1]: Finished motdgen.service. Jul 2 00:42:36.162949 tar[1211]: linux-arm64/helm Jul 2 00:42:36.166617 dbus-daemon[1193]: [system] SELinux support is enabled Jul 2 00:42:36.166777 systemd[1]: Started dbus.service. Jul 2 00:42:36.169402 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:42:36.169427 systemd[1]: Reached target system-config.target. Jul 2 00:42:36.170006 extend-filesystems[1195]: Found loop1 Jul 2 00:42:36.170006 extend-filesystems[1195]: Found vda Jul 2 00:42:36.170006 extend-filesystems[1195]: Found vda1 Jul 2 00:42:36.170006 extend-filesystems[1195]: Found vda2 Jul 2 00:42:36.170006 extend-filesystems[1195]: Found vda3 Jul 2 00:42:36.170006 extend-filesystems[1195]: Found usr Jul 2 00:42:36.170006 extend-filesystems[1195]: Found vda4 Jul 2 00:42:36.170006 extend-filesystems[1195]: Found vda6 Jul 2 00:42:36.170006 extend-filesystems[1195]: Found vda7 Jul 2 00:42:36.170006 extend-filesystems[1195]: Found vda9 Jul 2 00:42:36.170006 extend-filesystems[1195]: Checking size of /dev/vda9 Jul 2 00:42:36.170086 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:42:36.170101 systemd[1]: Reached target user-config.target. 
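extend-filesystems walks the vda partitions listed above before deciding whether the root filesystem needs to grow (it does, on the lines that follow). The same layout can be viewed directly, assuming util-linux's lsblk:

    $ lsblk -o NAME,SIZE,FSTYPE,LABEL /dev/vda   # the partitions the Found lines refer to
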
Jul 2 00:42:36.182732 extend-filesystems[1195]: Resized partition /dev/vda9 Jul 2 00:42:36.200514 extend-filesystems[1238]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 00:42:36.211211 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 00:42:36.230835 systemd-logind[1205]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 00:42:36.239497 systemd-logind[1205]: New seat seat0. Jul 2 00:42:36.240151 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 00:42:36.240992 bash[1234]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:42:36.241721 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 00:42:36.243017 systemd[1]: Started systemd-logind.service. Jul 2 00:42:36.253413 extend-filesystems[1238]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 00:42:36.253413 extend-filesystems[1238]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 00:42:36.253413 extend-filesystems[1238]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 00:42:36.257749 extend-filesystems[1195]: Resized filesystem in /dev/vda9 Jul 2 00:42:36.258515 update_engine[1206]: I0702 00:42:36.254973 1206 main.cc:92] Flatcar Update Engine starting Jul 2 00:42:36.255061 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:42:36.255228 systemd[1]: Finished extend-filesystems.service. Jul 2 00:42:36.261184 systemd[1]: Started update-engine.service. Jul 2 00:42:36.261344 update_engine[1206]: I0702 00:42:36.261209 1206 update_check_scheduler.cc:74] Next update check in 9m8s Jul 2 00:42:36.265297 systemd[1]: Started locksmithd.service. Jul 2 00:42:36.275599 env[1216]: time="2024-07-02T00:42:36.275553360Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 00:42:36.294183 env[1216]: time="2024-07-02T00:42:36.294120640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:42:36.294620 env[1216]: time="2024-07-02T00:42:36.294600400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:42:36.295820 env[1216]: time="2024-07-02T00:42:36.295793200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:42:36.295933 env[1216]: time="2024-07-02T00:42:36.295917800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:42:36.296197 env[1216]: time="2024-07-02T00:42:36.296174600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:42:36.296274 env[1216]: time="2024-07-02T00:42:36.296259960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:42:36.296337 env[1216]: time="2024-07-02T00:42:36.296322840Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:42:36.296405 env[1216]: time="2024-07-02T00:42:36.296391480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:42:36.296535 env[1216]: time="2024-07-02T00:42:36.296519480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:42:36.296803 env[1216]: time="2024-07-02T00:42:36.296783200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:42:36.297002 env[1216]: time="2024-07-02T00:42:36.296982480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:42:36.297117 env[1216]: time="2024-07-02T00:42:36.297101640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:42:36.297236 env[1216]: time="2024-07-02T00:42:36.297219640Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:42:36.297309 env[1216]: time="2024-07-02T00:42:36.297294600Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:42:36.301899 env[1216]: time="2024-07-02T00:42:36.301876720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:42:36.302156 env[1216]: time="2024-07-02T00:42:36.302123320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:42:36.302223 env[1216]: time="2024-07-02T00:42:36.302209240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:42:36.302456 env[1216]: time="2024-07-02T00:42:36.302438440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:42:36.302579 env[1216]: time="2024-07-02T00:42:36.302564720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:42:36.302758 env[1216]: time="2024-07-02T00:42:36.302740920Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:42:36.302825 env[1216]: time="2024-07-02T00:42:36.302811960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:42:36.303254 env[1216]: time="2024-07-02T00:42:36.303231120Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:42:36.303352 env[1216]: time="2024-07-02T00:42:36.303337880Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 00:42:36.303554 env[1216]: time="2024-07-02T00:42:36.303537120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:42:36.303623 env[1216]: time="2024-07-02T00:42:36.303609640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:42:36.303680 env[1216]: time="2024-07-02T00:42:36.303667920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:42:36.303928 env[1216]: time="2024-07-02T00:42:36.303910360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 2 00:42:36.304115 env[1216]: time="2024-07-02T00:42:36.304098520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:42:36.304457 env[1216]: time="2024-07-02T00:42:36.304437200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:42:36.304572 env[1216]: time="2024-07-02T00:42:36.304556320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:42:36.304677 env[1216]: time="2024-07-02T00:42:36.304661280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:42:36.304830 env[1216]: time="2024-07-02T00:42:36.304817400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:42:36.304958 env[1216]: time="2024-07-02T00:42:36.304943120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:42:36.305064 env[1216]: time="2024-07-02T00:42:36.305049240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:42:36.305126 env[1216]: time="2024-07-02T00:42:36.305112760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:42:36.305205 env[1216]: time="2024-07-02T00:42:36.305191320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:42:36.305268 env[1216]: time="2024-07-02T00:42:36.305255400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:42:36.305479 env[1216]: time="2024-07-02T00:42:36.305463000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:42:36.305546 env[1216]: time="2024-07-02T00:42:36.305533320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:42:36.305631 env[1216]: time="2024-07-02T00:42:36.305617800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:42:36.306003 env[1216]: time="2024-07-02T00:42:36.305984560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:42:36.306083 env[1216]: time="2024-07-02T00:42:36.306069640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:42:36.306155 env[1216]: time="2024-07-02T00:42:36.306141600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:42:36.306215 env[1216]: time="2024-07-02T00:42:36.306202040Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:42:36.306276 env[1216]: time="2024-07-02T00:42:36.306261800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 00:42:36.306328 env[1216]: time="2024-07-02T00:42:36.306315960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jul 2 00:42:36.306405 env[1216]: time="2024-07-02T00:42:36.306390920Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 00:42:36.306486 env[1216]: time="2024-07-02T00:42:36.306472680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 00:42:36.306766 env[1216]: time="2024-07-02T00:42:36.306711160Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:42:36.312615 env[1216]: time="2024-07-02T00:42:36.307093120Z" level=info msg="Connect containerd service" Jul 2 00:42:36.312615 env[1216]: time="2024-07-02T00:42:36.307202040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:42:36.313645 env[1216]: time="2024-07-02T00:42:36.313619840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:42:36.314001 env[1216]: time="2024-07-02T00:42:36.313964640Z" level=info msg="Start subscribing containerd event" Jul 2 00:42:36.314191 env[1216]: time="2024-07-02T00:42:36.314175400Z" level=info msg="Start recovering state" Jul 2 00:42:36.314485 env[1216]: time="2024-07-02T00:42:36.314459280Z" level=info msg="Start event monitor" Jul 2 00:42:36.314647 env[1216]: time="2024-07-02T00:42:36.314633120Z" level=info msg="Start snapshots syncer" Jul 2 
00:42:36.314718 env[1216]: time="2024-07-02T00:42:36.314706960Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:42:36.314814 env[1216]: time="2024-07-02T00:42:36.314801520Z" level=info msg="Start streaming server" Jul 2 00:42:36.315340 env[1216]: time="2024-07-02T00:42:36.315313720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:42:36.315464 env[1216]: time="2024-07-02T00:42:36.315451880Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:42:36.316537 systemd[1]: Started containerd.service. Jul 2 00:42:36.322290 env[1216]: time="2024-07-02T00:42:36.315602600Z" level=info msg="containerd successfully booted in 0.040681s" Jul 2 00:42:36.332211 locksmithd[1246]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:42:36.569207 tar[1211]: linux-arm64/LICENSE Jul 2 00:42:36.569319 tar[1211]: linux-arm64/README.md Jul 2 00:42:36.573551 systemd[1]: Finished prepare-helm.service. Jul 2 00:42:36.718229 systemd-networkd[1041]: eth0: Gained IPv6LL Jul 2 00:42:36.719913 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 00:42:36.720995 systemd[1]: Reached target network-online.target. Jul 2 00:42:36.723384 systemd[1]: Starting kubelet.service... Jul 2 00:42:37.216013 systemd[1]: Started kubelet.service. Jul 2 00:42:37.747763 kubelet[1262]: E0702 00:42:37.747680 1262 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:42:37.749932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:42:37.750059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:42:40.418243 sshd_keygen[1217]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:42:40.436499 systemd[1]: Finished sshd-keygen.service. Jul 2 00:42:40.438568 systemd[1]: Starting issuegen.service... Jul 2 00:42:40.443237 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:42:40.443397 systemd[1]: Finished issuegen.service. Jul 2 00:42:40.445505 systemd[1]: Starting systemd-user-sessions.service... Jul 2 00:42:40.451899 systemd[1]: Finished systemd-user-sessions.service. Jul 2 00:42:40.454422 systemd[1]: Started getty@tty1.service. Jul 2 00:42:40.456230 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 2 00:42:40.457079 systemd[1]: Reached target getty.target. Jul 2 00:42:40.457791 systemd[1]: Reached target multi-user.target. Jul 2 00:42:40.459523 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 00:42:40.466371 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 00:42:40.466516 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 00:42:40.467324 systemd[1]: Startup finished in 584ms (kernel) + 4.253s (initrd) + 7.646s (userspace) = 12.484s. Jul 2 00:42:41.006296 systemd[1]: Created slice system-sshd.slice. Jul 2 00:42:41.007408 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:56044.service. Jul 2 00:42:41.055628 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 56044 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:42:41.057599 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:42:41.071021 systemd[1]: Created slice user-500.slice. 
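The publickey login above kicks off the usual logind sequence: the per-user slice created here, then a user manager (user@500.service) and a session scope on the lines below. Once logged in, the same state is visible through logind; for example:

    $ loginctl list-sessions             # session 1 for user core should be listed
    $ systemctl status session-1.scope   # the scope unit started for this login
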
Jul 2 00:42:41.072126 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 00:42:41.074178 systemd-logind[1205]: New session 1 of user core. Jul 2 00:42:41.080347 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 00:42:41.081680 systemd[1]: Starting user@500.service... Jul 2 00:42:41.084611 (systemd)[1289]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:42:41.143848 systemd[1289]: Queued start job for default target default.target. Jul 2 00:42:41.144361 systemd[1289]: Reached target paths.target. Jul 2 00:42:41.144382 systemd[1289]: Reached target sockets.target. Jul 2 00:42:41.144394 systemd[1289]: Reached target timers.target. Jul 2 00:42:41.144404 systemd[1289]: Reached target basic.target. Jul 2 00:42:41.144458 systemd[1289]: Reached target default.target. Jul 2 00:42:41.144486 systemd[1289]: Startup finished in 53ms. Jul 2 00:42:41.144525 systemd[1]: Started user@500.service. Jul 2 00:42:41.145457 systemd[1]: Started session-1.scope. Jul 2 00:42:41.197558 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:56058.service. Jul 2 00:42:41.245589 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 56058 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:42:41.247140 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:42:41.250441 systemd-logind[1205]: New session 2 of user core. Jul 2 00:42:41.251622 systemd[1]: Started session-2.scope. Jul 2 00:42:41.305410 sshd[1298]: pam_unix(sshd:session): session closed for user core Jul 2 00:42:41.307827 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:56058.service: Deactivated successfully. Jul 2 00:42:41.308414 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:42:41.308921 systemd-logind[1205]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:42:41.309951 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:56062.service. Jul 2 00:42:41.310822 systemd-logind[1205]: Removed session 2. Jul 2 00:42:41.351025 sshd[1304]: Accepted publickey for core from 10.0.0.1 port 56062 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:42:41.352144 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:42:41.356063 systemd[1]: Started session-3.scope. Jul 2 00:42:41.356445 systemd-logind[1205]: New session 3 of user core. Jul 2 00:42:41.406261 sshd[1304]: pam_unix(sshd:session): session closed for user core Jul 2 00:42:41.408978 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:56062.service: Deactivated successfully. Jul 2 00:42:41.409606 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:42:41.410096 systemd-logind[1205]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:42:41.411090 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:56076.service. Jul 2 00:42:41.411651 systemd-logind[1205]: Removed session 3. Jul 2 00:42:41.452324 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 56076 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:42:41.453771 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:42:41.456935 systemd-logind[1205]: New session 4 of user core. Jul 2 00:42:41.457730 systemd[1]: Started session-4.scope. Jul 2 00:42:41.511082 sshd[1310]: pam_unix(sshd:session): session closed for user core Jul 2 00:42:41.514183 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:56076.service: Deactivated successfully. Jul 2 00:42:41.514793 systemd[1]: session-4.scope: Deactivated successfully. 
Jul 2 00:42:41.515250 systemd-logind[1205]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:42:41.516177 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:56078.service. Jul 2 00:42:41.516733 systemd-logind[1205]: Removed session 4. Jul 2 00:42:41.557504 sshd[1316]: Accepted publickey for core from 10.0.0.1 port 56078 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:42:41.559465 sshd[1316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:42:41.563623 systemd[1]: Started session-5.scope. Jul 2 00:42:41.564051 systemd-logind[1205]: New session 5 of user core. Jul 2 00:42:41.624607 sudo[1320]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:42:41.624846 sudo[1320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:42:41.682583 systemd[1]: Starting docker.service... Jul 2 00:42:41.766581 env[1332]: time="2024-07-02T00:42:41.766528110Z" level=info msg="Starting up" Jul 2 00:42:41.768060 env[1332]: time="2024-07-02T00:42:41.768031286Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 00:42:41.768173 env[1332]: time="2024-07-02T00:42:41.768134136Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 00:42:41.768248 env[1332]: time="2024-07-02T00:42:41.768230016Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 00:42:41.768483 env[1332]: time="2024-07-02T00:42:41.768465598Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 00:42:41.770308 env[1332]: time="2024-07-02T00:42:41.770275533Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 00:42:41.770308 env[1332]: time="2024-07-02T00:42:41.770298891Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 00:42:41.770406 env[1332]: time="2024-07-02T00:42:41.770313730Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 00:42:41.770406 env[1332]: time="2024-07-02T00:42:41.770322535Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 00:42:41.776184 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3185895038-merged.mount: Deactivated successfully. Jul 2 00:42:41.888012 env[1332]: time="2024-07-02T00:42:41.887932190Z" level=info msg="Loading containers: start." Jul 2 00:42:41.994193 kernel: Initializing XFRM netlink socket Jul 2 00:42:42.018772 env[1332]: time="2024-07-02T00:42:42.018739389Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 00:42:42.071464 systemd-networkd[1041]: docker0: Link UP Jul 2 00:42:42.080214 env[1332]: time="2024-07-02T00:42:42.080184009Z" level=info msg="Loading containers: done." Jul 2 00:42:42.095907 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1426696304-merged.mount: Deactivated successfully. 
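dockerd's startup messages show it talking to its bundled containerd over a unix socket and creating the default docker0 bridge; the hint about --bip names the daemon flag for choosing a different bridge subnet. A quick look at the result, with a hypothetical override shown only for illustration (in practice it would go in the unit file or daemon config, not be run by hand next to a live daemon):

    $ ip addr show docker0           # the bridge brought up above, 172.17.0.1/16 by default
    $ dockerd --bip=172.18.0.1/16    # hypothetical: the flag the log mentions, with a custom subnet
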
Jul 2 00:42:42.098731 env[1332]: time="2024-07-02T00:42:42.098701061Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:42:42.098997 env[1332]: time="2024-07-02T00:42:42.098979566Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 00:42:42.099161 env[1332]: time="2024-07-02T00:42:42.099135086Z" level=info msg="Daemon has completed initialization" Jul 2 00:42:42.113450 systemd[1]: Started docker.service. Jul 2 00:42:42.120812 env[1332]: time="2024-07-02T00:42:42.120697596Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:42:42.720007 env[1216]: time="2024-07-02T00:42:42.719954778Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 00:42:43.352008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3383101683.mount: Deactivated successfully. Jul 2 00:42:44.779505 env[1216]: time="2024-07-02T00:42:44.779462266Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:44.780904 env[1216]: time="2024-07-02T00:42:44.780862139Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:44.783927 env[1216]: time="2024-07-02T00:42:44.783900708Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:44.786399 env[1216]: time="2024-07-02T00:42:44.786368413Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:44.787121 env[1216]: time="2024-07-02T00:42:44.787093306Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\"" Jul 2 00:42:44.795812 env[1216]: time="2024-07-02T00:42:44.795784642Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 00:42:47.845856 env[1216]: time="2024-07-02T00:42:47.845807547Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:47.848773 env[1216]: time="2024-07-02T00:42:47.848724054Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:47.850530 env[1216]: time="2024-07-02T00:42:47.850497470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:47.852317 env[1216]: time="2024-07-02T00:42:47.852278955Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 2 00:42:47.853914 env[1216]: time="2024-07-02T00:42:47.853877079Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\"" Jul 2 00:42:47.864030 env[1216]: time="2024-07-02T00:42:47.863995047Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 00:42:48.000954 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:42:48.001306 systemd[1]: Stopped kubelet.service. Jul 2 00:42:48.002890 systemd[1]: Starting kubelet.service... Jul 2 00:42:48.086925 systemd[1]: Started kubelet.service. Jul 2 00:42:48.133376 kubelet[1488]: E0702 00:42:48.133245 1488 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:42:48.136440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:42:48.136577 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:42:49.190476 env[1216]: time="2024-07-02T00:42:49.190415782Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:49.191817 env[1216]: time="2024-07-02T00:42:49.191790077Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:49.193489 env[1216]: time="2024-07-02T00:42:49.193459136Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:49.195441 env[1216]: time="2024-07-02T00:42:49.195398280Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:49.196289 env[1216]: time="2024-07-02T00:42:49.196265583Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\"" Jul 2 00:42:49.205614 env[1216]: time="2024-07-02T00:42:49.205577071Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 00:42:50.215994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4068916643.mount: Deactivated successfully. 
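Both kubelet failures so far are the same condition: /var/lib/kubelet/config.yaml does not exist yet. On nodes provisioned with kubeadm that file is written during kubeadm init or kubeadm join, so a crash loop like this is expected until provisioning runs, with systemd rescheduling restarts in the meantime. To confirm, assuming a shell on the node:

    $ ls -l /var/lib/kubelet/config.yaml      # absent until provisioning writes it
    $ journalctl -u kubelet -n 20 --no-pager  # the 'failed to load kubelet config file' error above
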
Jul 2 00:42:50.614263 env[1216]: time="2024-07-02T00:42:50.614153717Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:50.615697 env[1216]: time="2024-07-02T00:42:50.615665563Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:50.617192 env[1216]: time="2024-07-02T00:42:50.617164415Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:50.618403 env[1216]: time="2024-07-02T00:42:50.618370351Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:50.618881 env[1216]: time="2024-07-02T00:42:50.618855204Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\"" Jul 2 00:42:50.627982 env[1216]: time="2024-07-02T00:42:50.627952147Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 00:42:51.216689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1206600972.mount: Deactivated successfully. Jul 2 00:42:52.112755 env[1216]: time="2024-07-02T00:42:52.112704345Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:52.114290 env[1216]: time="2024-07-02T00:42:52.114252493Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:52.116231 env[1216]: time="2024-07-02T00:42:52.116196496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:52.118748 env[1216]: time="2024-07-02T00:42:52.118718111Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:52.119479 env[1216]: time="2024-07-02T00:42:52.119451446Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jul 2 00:42:52.129114 env[1216]: time="2024-07-02T00:42:52.129069173Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 00:42:52.520421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2478997653.mount: Deactivated successfully. 
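The PullImage requests above are handled by containerd's CRI plugin, and the ImageCreate/ImageUpdate events are what its image store emits as each reference and digest is recorded. A hedged sketch of performing the same pull directly with containerd's Go client (the socket path and the k8s.io namespace match this setup; github.com/containerd/containerd is assumed to be available as a module):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same socket the CRI plugin in this log is serving on.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-pulled images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name(), "digest:", img.Target().Digest)
}

Images pulled into the k8s.io namespace this way land in the same image store the CRI events above describe, which makes this a convenient debugging probe.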
Jul 2 00:42:52.523826 env[1216]: time="2024-07-02T00:42:52.523784837Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:52.525060 env[1216]: time="2024-07-02T00:42:52.525027605Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:52.526613 env[1216]: time="2024-07-02T00:42:52.526590096Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:52.536392 env[1216]: time="2024-07-02T00:42:52.536355992Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:52.537057 env[1216]: time="2024-07-02T00:42:52.537016247Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 00:42:52.548509 env[1216]: time="2024-07-02T00:42:52.548477496Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 00:42:53.093076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1095640724.mount: Deactivated successfully. Jul 2 00:42:54.869402 env[1216]: time="2024-07-02T00:42:54.869357428Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:54.870768 env[1216]: time="2024-07-02T00:42:54.870737465Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:54.873028 env[1216]: time="2024-07-02T00:42:54.873000911Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:54.875925 env[1216]: time="2024-07-02T00:42:54.875899049Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:42:54.876679 env[1216]: time="2024-07-02T00:42:54.876647484Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jul 2 00:42:58.387405 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 00:42:58.387587 systemd[1]: Stopped kubelet.service. Jul 2 00:42:58.388988 systemd[1]: Starting kubelet.service... Jul 2 00:42:58.472098 systemd[1]: Started kubelet.service. 
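systemd is now on restart attempt 2 for kubelet.service. The counter it logs is the unit's NRestarts property, which can be read back directly; a small sketch (assumes systemctl is on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// systemd exposes the restart counter as the NRestarts unit property.
	out, err := exec.Command("systemctl", "show", "kubelet.service",
		"--property=NRestarts").Output()
	if err != nil {
		fmt.Println("systemctl failed:", err)
		return
	}
	// Output looks like "NRestarts=2".
	fmt.Println(strings.TrimSpace(string(out)))
}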
Jul 2 00:42:58.513291 kubelet[1600]: E0702 00:42:58.513244 1600 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:42:58.515643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:42:58.515773 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:43:00.680547 systemd[1]: Stopped kubelet.service. Jul 2 00:43:00.682439 systemd[1]: Starting kubelet.service... Jul 2 00:43:00.699702 systemd[1]: Reloading. Jul 2 00:43:00.757806 /usr/lib/systemd/system-generators/torcx-generator[1634]: time="2024-07-02T00:43:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:43:00.757838 /usr/lib/systemd/system-generators/torcx-generator[1634]: time="2024-07-02T00:43:00Z" level=info msg="torcx already run" Jul 2 00:43:00.834086 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:43:00.834108 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:43:00.849546 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:43:00.914663 systemd[1]: Started kubelet.service. Jul 2 00:43:00.915956 systemd[1]: Stopping kubelet.service... Jul 2 00:43:00.916334 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:43:00.916512 systemd[1]: Stopped kubelet.service. Jul 2 00:43:00.918016 systemd[1]: Starting kubelet.service... Jul 2 00:43:00.993710 systemd[1]: Started kubelet.service. Jul 2 00:43:01.044125 kubelet[1679]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:43:01.044125 kubelet[1679]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:43:01.044125 kubelet[1679]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
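All three deprecation warnings above point at the same remedy: move the flag values into the KubeletConfiguration file that --config loads. A hedged sketch of emitting such a file from Go; the field names follow kubelet.config.k8s.io/v1beta1 as understood for the v1.29 kubelet seen here, the flexvolume and static-pod paths are the ones this log itself reports, and the runtime endpoint is an assumption since the actual flag value never appears in the log. JSON output is used because any JSON document is also valid YAML:

package main

import (
	"encoding/json"
	"fmt"
)

// Hand-trimmed subset of kubelet.config.k8s.io/v1beta1 KubeletConfiguration,
// covering only the fields that replace the deprecated flags in the log.
type KubeletConfiguration struct {
	APIVersion               string `json:"apiVersion"`
	Kind                     string `json:"kind"`
	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint,omitempty"` // replaces --container-runtime-endpoint
	VolumePluginDir          string `json:"volumePluginDir,omitempty"`          // replaces --volume-plugin-dir
	StaticPodPath            string `json:"staticPodPath,omitempty"`
}

func main() {
	cfg := KubeletConfiguration{
		APIVersion: "kubelet.config.k8s.io/v1beta1",
		Kind:       "KubeletConfiguration",
		// Assumed endpoint; the real flag value is not shown in this log.
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
		StaticPodPath:            "/etc/kubernetes/manifests",
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}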
Jul 2 00:43:01.044475 kubelet[1679]: I0702 00:43:01.044182 1679 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:43:01.436313 kubelet[1679]: I0702 00:43:01.436214 1679 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 00:43:01.436313 kubelet[1679]: I0702 00:43:01.436247 1679 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:43:01.436459 kubelet[1679]: I0702 00:43:01.436445 1679 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 00:43:01.468027 kubelet[1679]: E0702 00:43:01.467995 1679 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 00:43:01.469501 kubelet[1679]: I0702 00:43:01.469379 1679 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:43:01.476506 kubelet[1679]: I0702 00:43:01.476477 1679 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 00:43:01.478304 kubelet[1679]: I0702 00:43:01.478265 1679 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:43:01.478646 kubelet[1679]: I0702 00:43:01.478622 1679 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:43:01.478783 kubelet[1679]: I0702 00:43:01.478769 1679 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:43:01.478849 kubelet[1679]: I0702 00:43:01.478839 1679 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:43:01.479046 kubelet[1679]: I0702 00:43:01.479026 1679 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:43:01.481507 kubelet[1679]: I0702 00:43:01.481483 1679 kubelet.go:396] "Attempting to sync node with API server" Jul 2 00:43:01.481636 kubelet[1679]: I0702 
00:43:01.481624 1679 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:43:01.481737 kubelet[1679]: I0702 00:43:01.481724 1679 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:43:01.481813 kubelet[1679]: I0702 00:43:01.481803 1679 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:43:01.482078 kubelet[1679]: W0702 00:43:01.482001 1679 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 00:43:01.482078 kubelet[1679]: E0702 00:43:01.482060 1679 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 00:43:01.482879 kubelet[1679]: W0702 00:43:01.482830 1679 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 00:43:01.482954 kubelet[1679]: E0702 00:43:01.482884 1679 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 00:43:01.484426 kubelet[1679]: I0702 00:43:01.484400 1679 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 00:43:01.485834 kubelet[1679]: I0702 00:43:01.485804 1679 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:43:01.490143 kubelet[1679]: W0702 00:43:01.488445 1679 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:43:01.490143 kubelet[1679]: I0702 00:43:01.490119 1679 server.go:1256] "Started kubelet" Jul 2 00:43:01.490779 kubelet[1679]: I0702 00:43:01.490749 1679 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:43:01.491092 kubelet[1679]: I0702 00:43:01.491066 1679 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:43:01.491392 kubelet[1679]: I0702 00:43:01.491370 1679 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:43:01.493309 kubelet[1679]: I0702 00:43:01.493280 1679 server.go:461] "Adding debug handlers to kubelet server" Jul 2 00:43:01.494617 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 00:43:01.497611 kubelet[1679]: I0702 00:43:01.497572 1679 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:43:01.498524 kubelet[1679]: E0702 00:43:01.498498 1679 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:43:01.498599 kubelet[1679]: I0702 00:43:01.498591 1679 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:43:01.498710 kubelet[1679]: I0702 00:43:01.498690 1679 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:43:01.498780 kubelet[1679]: I0702 00:43:01.498755 1679 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:43:01.499108 kubelet[1679]: W0702 00:43:01.499056 1679 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 00:43:01.499108 kubelet[1679]: E0702 00:43:01.499107 1679 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 00:43:01.499596 kubelet[1679]: E0702 00:43:01.499371 1679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="200ms" Jul 2 00:43:01.499710 kubelet[1679]: I0702 00:43:01.499691 1679 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:43:01.499807 kubelet[1679]: I0702 00:43:01.499779 1679 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:43:01.503592 kubelet[1679]: I0702 00:43:01.503556 1679 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:43:01.504461 kubelet[1679]: E0702 00:43:01.504415 1679 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de3ea3edaad24c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 00:43:01.490094668 +0000 UTC m=+0.492876883,LastTimestamp:2024-07-02 00:43:01.490094668 +0000 UTC m=+0.492876883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 2 00:43:01.515573 kubelet[1679]: I0702 00:43:01.515546 1679 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:43:01.515573 kubelet[1679]: I0702 00:43:01.515567 1679 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:43:01.515712 kubelet[1679]: I0702 00:43:01.515586 1679 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:43:01.516196 kubelet[1679]: I0702 00:43:01.516181 1679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:43:01.517111 kubelet[1679]: I0702 00:43:01.517087 1679 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:43:01.517111 kubelet[1679]: I0702 00:43:01.517109 1679 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:43:01.517293 kubelet[1679]: I0702 00:43:01.517124 1679 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 00:43:01.517293 kubelet[1679]: E0702 00:43:01.517188 1679 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:43:01.519354 kubelet[1679]: W0702 00:43:01.519323 1679 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 00:43:01.519422 kubelet[1679]: E0702 00:43:01.519359 1679 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 00:43:01.575437 kubelet[1679]: I0702 00:43:01.575392 1679 policy_none.go:49] "None policy: Start" Jul 2 00:43:01.576266 kubelet[1679]: I0702 00:43:01.576233 1679 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:43:01.576339 kubelet[1679]: I0702 00:43:01.576292 1679 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:43:01.581736 systemd[1]: Created slice kubepods.slice. Jul 2 00:43:01.585655 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 00:43:01.588076 systemd[1]: Created slice kubepods-besteffort.slice. Jul 2 00:43:01.594896 kubelet[1679]: I0702 00:43:01.594864 1679 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:43:01.595173 kubelet[1679]: I0702 00:43:01.595084 1679 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:43:01.596545 kubelet[1679]: E0702 00:43:01.596528 1679 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 00:43:01.599959 kubelet[1679]: I0702 00:43:01.599930 1679 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:43:01.600474 kubelet[1679]: E0702 00:43:01.600446 1679 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Jul 2 00:43:01.617730 kubelet[1679]: I0702 00:43:01.617698 1679 topology_manager.go:215] "Topology Admit Handler" podUID="ec90f202a412747a2b1d50a20dd4050d" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:43:01.620636 kubelet[1679]: I0702 00:43:01.620606 1679 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:43:01.621732 kubelet[1679]: I0702 00:43:01.621706 1679 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:43:01.626157 systemd[1]: Created slice kubepods-burstable-podec90f202a412747a2b1d50a20dd4050d.slice. Jul 2 00:43:01.639054 systemd[1]: Created slice kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice. 
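The container_manager_linux nodeConfig dump above is plain JSON, and it carries the default hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%. A sketch that extracts just those from an abbreviated copy of that dump (the structs are a hand-trimmed subset for illustration, not the kubelet's own types):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Subset of the nodeConfig JSON the kubelet printed above.
type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"` // nil when the threshold is percentage-based
		Percentage float64 `json:"Percentage"`
	} `json:"Value"`
}

type nodeConfig struct {
	HardEvictionThresholds []threshold `json:"HardEvictionThresholds"`
}

func main() {
	// Abbreviated copy of the dump from the log above.
	raw := `{"HardEvictionThresholds":[
	  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
	  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
	  {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
	  {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}}]}`

	var cfg nodeConfig
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		log.Fatal(err)
	}
	for _, t := range cfg.HardEvictionThresholds {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}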
Jul 2 00:43:01.642655 systemd[1]: Created slice kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice. Jul 2 00:43:01.699352 kubelet[1679]: I0702 00:43:01.699241 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec90f202a412747a2b1d50a20dd4050d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec90f202a412747a2b1d50a20dd4050d\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:43:01.700571 kubelet[1679]: E0702 00:43:01.700545 1679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="400ms" Jul 2 00:43:01.799824 kubelet[1679]: I0702 00:43:01.799798 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec90f202a412747a2b1d50a20dd4050d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec90f202a412747a2b1d50a20dd4050d\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:43:01.799978 kubelet[1679]: I0702 00:43:01.799963 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:43:01.800083 kubelet[1679]: I0702 00:43:01.800071 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:43:01.800202 kubelet[1679]: I0702 00:43:01.800187 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:43:01.800288 kubelet[1679]: I0702 00:43:01.800270 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec90f202a412747a2b1d50a20dd4050d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ec90f202a412747a2b1d50a20dd4050d\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:43:01.800372 kubelet[1679]: I0702 00:43:01.800361 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:43:01.800452 kubelet[1679]: I0702 00:43:01.800442 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:43:01.800550 kubelet[1679]: I0702 00:43:01.800539 1679 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:43:01.801543 kubelet[1679]: I0702 00:43:01.801519 1679 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:43:01.801894 kubelet[1679]: E0702 00:43:01.801851 1679 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Jul 2 00:43:01.938838 kubelet[1679]: E0702 00:43:01.938790 1679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:01.939625 env[1216]: time="2024-07-02T00:43:01.939579467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ec90f202a412747a2b1d50a20dd4050d,Namespace:kube-system,Attempt:0,}" Jul 2 00:43:01.941461 kubelet[1679]: E0702 00:43:01.941423 1679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:01.941847 env[1216]: time="2024-07-02T00:43:01.941794115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,}" Jul 2 00:43:01.944296 kubelet[1679]: E0702 00:43:01.944277 1679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:01.944740 env[1216]: time="2024-07-02T00:43:01.944698916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,}" Jul 2 00:43:02.101274 kubelet[1679]: E0702 00:43:02.101182 1679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="800ms" Jul 2 00:43:02.203997 kubelet[1679]: I0702 00:43:02.203971 1679 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:43:02.204500 kubelet[1679]: E0702 00:43:02.204481 1679 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Jul 2 00:43:02.446988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount30089531.mount: Deactivated successfully. 
Jul 2 00:43:02.452467 env[1216]: time="2024-07-02T00:43:02.452419535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:02.453823 env[1216]: time="2024-07-02T00:43:02.453784354Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:02.455439 env[1216]: time="2024-07-02T00:43:02.455408513Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:02.456282 env[1216]: time="2024-07-02T00:43:02.456257736Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:02.457191 env[1216]: time="2024-07-02T00:43:02.457161702Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:02.458952 env[1216]: time="2024-07-02T00:43:02.458922820Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:02.462412 env[1216]: time="2024-07-02T00:43:02.462383945Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:02.464730 env[1216]: time="2024-07-02T00:43:02.464692136Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:02.466151 env[1216]: time="2024-07-02T00:43:02.466111578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:02.467273 env[1216]: time="2024-07-02T00:43:02.467221262Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:02.468104 env[1216]: time="2024-07-02T00:43:02.468077133Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:02.469037 env[1216]: time="2024-07-02T00:43:02.469008570Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:02.515160 env[1216]: time="2024-07-02T00:43:02.515092856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:43:02.515310 env[1216]: time="2024-07-02T00:43:02.515138669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:43:02.515310 env[1216]: time="2024-07-02T00:43:02.515150283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:43:02.515773 env[1216]: time="2024-07-02T00:43:02.515618304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:43:02.515773 env[1216]: time="2024-07-02T00:43:02.515645175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:43:02.515773 env[1216]: time="2024-07-02T00:43:02.515654947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:43:02.515773 env[1216]: time="2024-07-02T00:43:02.515504052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:43:02.515773 env[1216]: time="2024-07-02T00:43:02.515539733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:43:02.515773 env[1216]: time="2024-07-02T00:43:02.515550225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:43:02.515773 env[1216]: time="2024-07-02T00:43:02.515695914Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/229207d5fe4f640850bd081a95833540a5dc15ddaafde3cdb1cb975aa8dd5fc4 pid=1737 runtime=io.containerd.runc.v2 Jul 2 00:43:02.516036 env[1216]: time="2024-07-02T00:43:02.515809966Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/03632155fce5387b902a90a00f5d325c83476221e31663afbb11bf3122adacd2 pid=1731 runtime=io.containerd.runc.v2 Jul 2 00:43:02.516036 env[1216]: time="2024-07-02T00:43:02.515848170Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9169321d5a666027385047cffe16a5b071b530f9c317ad744333923c31f1608 pid=1732 runtime=io.containerd.runc.v2 Jul 2 00:43:02.530567 systemd[1]: Started cri-containerd-229207d5fe4f640850bd081a95833540a5dc15ddaafde3cdb1cb975aa8dd5fc4.scope. Jul 2 00:43:02.533113 systemd[1]: Started cri-containerd-f9169321d5a666027385047cffe16a5b071b530f9c317ad744333923c31f1608.scope. Jul 2 00:43:02.536514 systemd[1]: Started cri-containerd-03632155fce5387b902a90a00f5d325c83476221e31663afbb11bf3122adacd2.scope. 
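The three cri-containerd-*.scope units just started are runc v2 shims, one per sandbox, and the "starting signal loop" lines (pids 1731, 1732, 1737) are those shim processes coming up. All of this is driven by the kubelet over the CRI gRPC API on the containerd socket. A hedged, read-only sketch of talking to that same endpoint (assumes the k8s.io/cri-api and google.golang.org/grpc modules):

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion) // containerd 1.6.16 in this log

	// Sandboxes created by the RunPodSandbox calls above show up here.
	sandboxes, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range sandboxes.Items {
		fmt.Println(sb.Id, sb.Metadata.Name, sb.State)
	}
}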
Jul 2 00:43:02.606257 env[1216]: time="2024-07-02T00:43:02.606215618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ec90f202a412747a2b1d50a20dd4050d,Namespace:kube-system,Attempt:0,} returns sandbox id \"229207d5fe4f640850bd081a95833540a5dc15ddaafde3cdb1cb975aa8dd5fc4\"" Jul 2 00:43:02.610699 kubelet[1679]: E0702 00:43:02.610672 1679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:02.613354 env[1216]: time="2024-07-02T00:43:02.613319558Z" level=info msg="CreateContainer within sandbox \"229207d5fe4f640850bd081a95833540a5dc15ddaafde3cdb1cb975aa8dd5fc4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:43:02.621296 env[1216]: time="2024-07-02T00:43:02.621259826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"03632155fce5387b902a90a00f5d325c83476221e31663afbb11bf3122adacd2\"" Jul 2 00:43:02.622345 env[1216]: time="2024-07-02T00:43:02.622316248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9169321d5a666027385047cffe16a5b071b530f9c317ad744333923c31f1608\"" Jul 2 00:43:02.622836 kubelet[1679]: E0702 00:43:02.622712 1679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:02.623696 kubelet[1679]: E0702 00:43:02.623634 1679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:02.625120 env[1216]: time="2024-07-02T00:43:02.625079446Z" level=info msg="CreateContainer within sandbox \"03632155fce5387b902a90a00f5d325c83476221e31663afbb11bf3122adacd2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:43:02.626814 env[1216]: time="2024-07-02T00:43:02.626779333Z" level=info msg="CreateContainer within sandbox \"f9169321d5a666027385047cffe16a5b071b530f9c317ad744333923c31f1608\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:43:02.634486 env[1216]: time="2024-07-02T00:43:02.634428143Z" level=info msg="CreateContainer within sandbox \"229207d5fe4f640850bd081a95833540a5dc15ddaafde3cdb1cb975aa8dd5fc4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"389e4272a44bf67bef4049624f2364000bf8c8180a3fad277039bfb69d20b8dd\"" Jul 2 00:43:02.635094 env[1216]: time="2024-07-02T00:43:02.635066402Z" level=info msg="StartContainer for \"389e4272a44bf67bef4049624f2364000bf8c8180a3fad277039bfb69d20b8dd\"" Jul 2 00:43:02.638227 env[1216]: time="2024-07-02T00:43:02.638194982Z" level=info msg="CreateContainer within sandbox \"03632155fce5387b902a90a00f5d325c83476221e31663afbb11bf3122adacd2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e21b3c1106fa315309dadba1f614e7ae2817211717d34dfaee3ec2e940b0cd79\"" Jul 2 00:43:02.638603 kubelet[1679]: W0702 00:43:02.638526 1679 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.35:6443: connect: connection refused Jul 2 00:43:02.638603 kubelet[1679]: E0702 00:43:02.638581 1679 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 00:43:02.638853 env[1216]: time="2024-07-02T00:43:02.638818463Z" level=info msg="StartContainer for \"e21b3c1106fa315309dadba1f614e7ae2817211717d34dfaee3ec2e940b0cd79\"" Jul 2 00:43:02.640048 env[1216]: time="2024-07-02T00:43:02.640005437Z" level=info msg="CreateContainer within sandbox \"f9169321d5a666027385047cffe16a5b071b530f9c317ad744333923c31f1608\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8df028f86e3ee281e6abeb626ee97dda07ac202741a637469545af724bc33f5a\"" Jul 2 00:43:02.640397 env[1216]: time="2024-07-02T00:43:02.640372862Z" level=info msg="StartContainer for \"8df028f86e3ee281e6abeb626ee97dda07ac202741a637469545af724bc33f5a\"" Jul 2 00:43:02.651748 systemd[1]: Started cri-containerd-389e4272a44bf67bef4049624f2364000bf8c8180a3fad277039bfb69d20b8dd.scope. Jul 2 00:43:02.656767 systemd[1]: Started cri-containerd-e21b3c1106fa315309dadba1f614e7ae2817211717d34dfaee3ec2e940b0cd79.scope. Jul 2 00:43:02.664320 systemd[1]: Started cri-containerd-8df028f86e3ee281e6abeb626ee97dda07ac202741a637469545af724bc33f5a.scope. Jul 2 00:43:02.677662 kubelet[1679]: W0702 00:43:02.677614 1679 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 00:43:02.677768 kubelet[1679]: E0702 00:43:02.677670 1679 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 00:43:02.724798 env[1216]: time="2024-07-02T00:43:02.723630963Z" level=info msg="StartContainer for \"e21b3c1106fa315309dadba1f614e7ae2817211717d34dfaee3ec2e940b0cd79\" returns successfully" Jul 2 00:43:02.725270 env[1216]: time="2024-07-02T00:43:02.725240225Z" level=info msg="StartContainer for \"389e4272a44bf67bef4049624f2364000bf8c8180a3fad277039bfb69d20b8dd\" returns successfully" Jul 2 00:43:02.727259 env[1216]: time="2024-07-02T00:43:02.727229207Z" level=info msg="StartContainer for \"8df028f86e3ee281e6abeb626ee97dda07ac202741a637469545af724bc33f5a\" returns successfully" Jul 2 00:43:02.768638 kubelet[1679]: W0702 00:43:02.768578 1679 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 00:43:02.768638 kubelet[1679]: E0702 00:43:02.768634 1679 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 00:43:02.903000 kubelet[1679]: E0702 00:43:02.902961 1679 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="1.6s" Jul 2 00:43:03.006120 kubelet[1679]: I0702 00:43:03.005806 1679 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:43:03.523739 kubelet[1679]: E0702 00:43:03.523703 1679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:03.525764 kubelet[1679]: E0702 00:43:03.525737 1679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:03.528095 kubelet[1679]: E0702 00:43:03.528068 1679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:04.158256 kubelet[1679]: I0702 00:43:04.158221 1679 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 00:43:04.177219 kubelet[1679]: E0702 00:43:04.177186 1679 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:04.277752 kubelet[1679]: E0702 00:43:04.277717 1679 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:04.378354 kubelet[1679]: E0702 00:43:04.378321 1679 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:04.479230 kubelet[1679]: E0702 00:43:04.479126 1679 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:04.529875 kubelet[1679]: E0702 00:43:04.529846 1679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:04.579882 kubelet[1679]: E0702 00:43:04.579855 1679 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:04.681261 kubelet[1679]: E0702 00:43:04.681214 1679 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:04.782274 kubelet[1679]: E0702 00:43:04.782124 1679 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:04.882910 kubelet[1679]: E0702 00:43:04.882865 1679 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:04.983988 kubelet[1679]: E0702 00:43:04.983926 1679 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:05.084275 kubelet[1679]: E0702 00:43:05.084083 1679 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:05.184481 kubelet[1679]: E0702 00:43:05.184442 1679 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:05.285388 kubelet[1679]: E0702 00:43:05.285350 1679 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:05.386069 kubelet[1679]: E0702 00:43:05.385971 1679 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:05.452224 kubelet[1679]: E0702 00:43:05.452194 1679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:05.486959 kubelet[1679]: E0702 00:43:05.486922 1679 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:05.587292 kubelet[1679]: E0702 00:43:05.587247 1679 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:06.484350 kubelet[1679]: I0702 00:43:06.484303 1679 apiserver.go:52] "Watching apiserver" Jul 2 00:43:06.499502 kubelet[1679]: I0702 00:43:06.499448 1679 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:43:06.651045 systemd[1]: Reloading. Jul 2 00:43:06.700423 /usr/lib/systemd/system-generators/torcx-generator[1983]: time="2024-07-02T00:43:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:43:06.700843 /usr/lib/systemd/system-generators/torcx-generator[1983]: time="2024-07-02T00:43:06Z" level=info msg="torcx already run" Jul 2 00:43:06.774925 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:43:06.774944 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:43:06.792022 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:43:06.876674 kubelet[1679]: I0702 00:43:06.876633 1679 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:43:06.876849 systemd[1]: Stopping kubelet.service... Jul 2 00:43:06.893679 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:43:06.893884 systemd[1]: Stopped kubelet.service. Jul 2 00:43:06.896283 systemd[1]: Starting kubelet.service... Jul 2 00:43:06.973810 systemd[1]: Started kubelet.service. Jul 2 00:43:07.027773 kubelet[2025]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:43:07.027773 kubelet[2025]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:43:07.027773 kubelet[2025]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
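The recurring dns.go "Nameserver limits exceeded" errors are the kubelet clamping the host's /etc/resolv.conf to the three nameservers the glibc resolver can use; the applied line quoted in the log (1.1.1.1 1.0.0.1 8.8.8.8) is what survives the clamp, with any further entries dropped. A simplified sketch of that behavior (the limit of 3 matches the kubelet's check; the parsing here is deliberately minimal):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit the kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("dropping %d extra nameserver(s)\n", len(servers)-maxNameservers)
		servers = servers[:maxNameservers]
	}
	// On this host the applied line ends up as: 1.1.1.1 1.0.0.1 8.8.8.8
	fmt.Println("applied nameservers:", strings.Join(servers, " "))
}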
Jul 2 00:43:07.028106 kubelet[2025]: I0702 00:43:07.027745 2025 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:43:07.033876 kubelet[2025]: I0702 00:43:07.033830 2025 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 00:43:07.033876 kubelet[2025]: I0702 00:43:07.033865 2025 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:43:07.034162 kubelet[2025]: I0702 00:43:07.034139 2025 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 00:43:07.037001 kubelet[2025]: I0702 00:43:07.036972 2025 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:43:07.038719 kubelet[2025]: I0702 00:43:07.038694 2025 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:43:07.045380 kubelet[2025]: I0702 00:43:07.045358 2025 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 00:43:07.045559 kubelet[2025]: I0702 00:43:07.045545 2025 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:43:07.045714 kubelet[2025]: I0702 00:43:07.045700 2025 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:43:07.045818 kubelet[2025]: I0702 00:43:07.045720 2025 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:43:07.045818 kubelet[2025]: I0702 00:43:07.045729 2025 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:43:07.045818 kubelet[2025]: I0702 00:43:07.045768 2025 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:43:07.045958 kubelet[2025]: I0702 00:43:07.045940 2025 kubelet.go:396] "Attempting to sync node with API server" Jul 2 00:43:07.046007 kubelet[2025]: I0702 00:43:07.045963 2025 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:43:07.046033 kubelet[2025]: I0702 00:43:07.046026 2025 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:43:07.046065 
kubelet[2025]: I0702 00:43:07.046041 2025 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:43:07.049223 sudo[2040]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 00:43:07.049441 sudo[2040]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 00:43:07.050125 kubelet[2025]: I0702 00:43:07.050088 2025 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 00:43:07.050350 kubelet[2025]: I0702 00:43:07.050327 2025 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:43:07.052420 kubelet[2025]: I0702 00:43:07.051679 2025 server.go:1256] "Started kubelet" Jul 2 00:43:07.052420 kubelet[2025]: I0702 00:43:07.052073 2025 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:43:07.052420 kubelet[2025]: I0702 00:43:07.052184 2025 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:43:07.052420 kubelet[2025]: I0702 00:43:07.052421 2025 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:43:07.052837 kubelet[2025]: I0702 00:43:07.052799 2025 server.go:461] "Adding debug handlers to kubelet server" Jul 2 00:43:07.054793 kubelet[2025]: I0702 00:43:07.054761 2025 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:43:07.070771 kubelet[2025]: E0702 00:43:07.070738 2025 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:43:07.070771 kubelet[2025]: I0702 00:43:07.070778 2025 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:43:07.071016 kubelet[2025]: I0702 00:43:07.070874 2025 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:43:07.071016 kubelet[2025]: I0702 00:43:07.071012 2025 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:43:07.075347 kubelet[2025]: I0702 00:43:07.075321 2025 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:43:07.075930 kubelet[2025]: I0702 00:43:07.075897 2025 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:43:07.077032 kubelet[2025]: I0702 00:43:07.077012 2025 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:43:07.080108 kubelet[2025]: E0702 00:43:07.080028 2025 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:43:07.080366 kubelet[2025]: I0702 00:43:07.080326 2025 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:43:07.082063 kubelet[2025]: I0702 00:43:07.082041 2025 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:43:07.084703 kubelet[2025]: I0702 00:43:07.084673 2025 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:43:07.084902 kubelet[2025]: I0702 00:43:07.084885 2025 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 00:43:07.085867 kubelet[2025]: E0702 00:43:07.085808 2025 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:43:07.125704 kubelet[2025]: I0702 00:43:07.125677 2025 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:43:07.125704 kubelet[2025]: I0702 00:43:07.125703 2025 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:43:07.125935 kubelet[2025]: I0702 00:43:07.125723 2025 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:43:07.125935 kubelet[2025]: I0702 00:43:07.125925 2025 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:43:07.126047 kubelet[2025]: I0702 00:43:07.125950 2025 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:43:07.126047 kubelet[2025]: I0702 00:43:07.125957 2025 policy_none.go:49] "None policy: Start" Jul 2 00:43:07.126762 kubelet[2025]: I0702 00:43:07.126742 2025 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:43:07.126937 kubelet[2025]: I0702 00:43:07.126923 2025 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:43:07.127319 kubelet[2025]: I0702 00:43:07.127302 2025 state_mem.go:75] "Updated machine memory state" Jul 2 00:43:07.131475 kubelet[2025]: I0702 00:43:07.131451 2025 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:43:07.131679 kubelet[2025]: I0702 00:43:07.131663 2025 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:43:07.174036 kubelet[2025]: I0702 00:43:07.174005 2025 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:43:07.181529 kubelet[2025]: I0702 00:43:07.181358 2025 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jul 2 00:43:07.181529 kubelet[2025]: I0702 00:43:07.181441 2025 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 00:43:07.186546 kubelet[2025]: I0702 00:43:07.186523 2025 topology_manager.go:215] "Topology Admit Handler" podUID="ec90f202a412747a2b1d50a20dd4050d" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:43:07.186765 kubelet[2025]: I0702 00:43:07.186747 2025 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:43:07.187510 kubelet[2025]: I0702 00:43:07.187492 2025 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:43:07.271302 kubelet[2025]: I0702 00:43:07.271267 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:43:07.271510 kubelet[2025]: I0702 00:43:07.271497 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:43:07.271616 kubelet[2025]: I0702 00:43:07.271603 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:43:07.271710 kubelet[2025]: I0702 00:43:07.271700 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec90f202a412747a2b1d50a20dd4050d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec90f202a412747a2b1d50a20dd4050d\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:43:07.271798 kubelet[2025]: I0702 00:43:07.271789 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec90f202a412747a2b1d50a20dd4050d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec90f202a412747a2b1d50a20dd4050d\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:43:07.271899 kubelet[2025]: I0702 00:43:07.271888 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec90f202a412747a2b1d50a20dd4050d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ec90f202a412747a2b1d50a20dd4050d\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:43:07.272022 kubelet[2025]: I0702 00:43:07.272011 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:43:07.272124 kubelet[2025]: I0702 00:43:07.272114 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:43:07.272271 kubelet[2025]: I0702 00:43:07.272259 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:43:07.495699 kubelet[2025]: E0702 00:43:07.495602 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:07.496528 kubelet[2025]: E0702 00:43:07.496505 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:07.497240 kubelet[2025]: E0702 00:43:07.497174 2025 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:07.503977 sudo[2040]: pam_unix(sudo:session): session closed for user root Jul 2 00:43:08.046714 kubelet[2025]: I0702 00:43:08.046657 2025 apiserver.go:52] "Watching apiserver" Jul 2 00:43:08.071844 kubelet[2025]: I0702 00:43:08.071787 2025 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:43:08.102768 kubelet[2025]: E0702 00:43:08.102735 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:08.105571 kubelet[2025]: E0702 00:43:08.105526 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:08.109437 kubelet[2025]: E0702 00:43:08.108562 2025 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 00:43:08.109437 kubelet[2025]: E0702 00:43:08.108986 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:08.128183 kubelet[2025]: I0702 00:43:08.128152 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.128098358 podStartE2EDuration="1.128098358s" podCreationTimestamp="2024-07-02 00:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:43:08.121543232 +0000 UTC m=+1.143911760" watchObservedRunningTime="2024-07-02 00:43:08.128098358 +0000 UTC m=+1.150466886" Jul 2 00:43:08.128313 kubelet[2025]: I0702 00:43:08.128248 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.128229427 podStartE2EDuration="1.128229427s" podCreationTimestamp="2024-07-02 00:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:43:08.127951562 +0000 UTC m=+1.150320090" watchObservedRunningTime="2024-07-02 00:43:08.128229427 +0000 UTC m=+1.150597955" Jul 2 00:43:09.104576 kubelet[2025]: E0702 00:43:09.104550 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:09.170436 sudo[1320]: pam_unix(sudo:session): session closed for user root Jul 2 00:43:09.172260 sshd[1316]: pam_unix(sshd:session): session closed for user core Jul 2 00:43:09.174826 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:56078.service: Deactivated successfully. Jul 2 00:43:09.175794 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:43:09.175945 systemd[1]: session-5.scope: Consumed 7.981s CPU time. Jul 2 00:43:09.176808 systemd-logind[1205]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:43:09.177734 systemd-logind[1205]: Removed session 5. 
Jul 2 00:43:10.106150 kubelet[2025]: E0702 00:43:10.106111 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:12.554931 kubelet[2025]: E0702 00:43:12.554849 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:12.568179 kubelet[2025]: I0702 00:43:12.568144 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.568094594 podStartE2EDuration="5.568094594s" podCreationTimestamp="2024-07-02 00:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:43:08.133611223 +0000 UTC m=+1.155979751" watchObservedRunningTime="2024-07-02 00:43:12.568094594 +0000 UTC m=+5.590463122" Jul 2 00:43:13.110383 kubelet[2025]: E0702 00:43:13.110358 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:17.159906 kubelet[2025]: E0702 00:43:17.159868 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:18.118135 kubelet[2025]: E0702 00:43:18.118092 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:18.984158 kubelet[2025]: E0702 00:43:18.983889 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:21.280001 kubelet[2025]: I0702 00:43:21.279966 2025 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:43:21.280499 env[1216]: time="2024-07-02T00:43:21.280413958Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 00:43:21.280688 kubelet[2025]: I0702 00:43:21.280625 2025 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:43:21.953196 kubelet[2025]: I0702 00:43:21.953155 2025 topology_manager.go:215] "Topology Admit Handler" podUID="3dd7618e-7a2d-4198-9320-408e788e6d62" podNamespace="kube-system" podName="kube-proxy-bvcpq" Jul 2 00:43:21.954082 kubelet[2025]: I0702 00:43:21.954041 2025 topology_manager.go:215] "Topology Admit Handler" podUID="e12c83a6-d6f9-417f-83b7-fef0196df593" podNamespace="kube-system" podName="cilium-w6jpl" Jul 2 00:43:21.958971 systemd[1]: Created slice kubepods-besteffort-pod3dd7618e_7a2d_4198_9320_408e788e6d62.slice. Jul 2 00:43:21.969556 systemd[1]: Created slice kubepods-burstable-pode12c83a6_d6f9_417f_83b7_fef0196df593.slice. 
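The "Created slice" entries show the systemd cgroup driver's naming scheme: the pod's QoS class picks the parent slice (kubepods-besteffort, kubepods-burstable) and the pod UID is appended with its dashes mapped to underscores. A small sketch reproducing the names logged above (an illustration of the convention, not kubelet code):

    # Rebuild the systemd slice names seen in the log from pod UID + QoS class.
    def pod_slice(uid: str, qos: str) -> str:
        # Guaranteed pods sit directly under kubepods.slice; the other QoS
        # classes get an intermediate besteffort/burstable slice.
        prefix = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
        return f"{prefix}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice("3dd7618e-7a2d-4198-9320-408e788e6d62", "besteffort"))
    # kubepods-besteffort-pod3dd7618e_7a2d_4198_9320_408e788e6d62.slice
    print(pod_slice("e12c83a6-d6f9-417f-83b7-fef0196df593", "burstable"))
    # kubepods-burstable-pode12c83a6_d6f9_417f_83b7_fef0196df593.slice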
Jul 2 00:43:21.976562 kubelet[2025]: I0702 00:43:21.976511 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-xtables-lock\") pod \"cilium-w6jpl\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") " pod="kube-system/cilium-w6jpl" Jul 2 00:43:21.976668 kubelet[2025]: I0702 00:43:21.976589 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brp72\" (UniqueName: \"kubernetes.io/projected/3dd7618e-7a2d-4198-9320-408e788e6d62-kube-api-access-brp72\") pod \"kube-proxy-bvcpq\" (UID: \"3dd7618e-7a2d-4198-9320-408e788e6d62\") " pod="kube-system/kube-proxy-bvcpq" Jul 2 00:43:21.976668 kubelet[2025]: I0702 00:43:21.976613 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-cilium-run\") pod \"cilium-w6jpl\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") " pod="kube-system/cilium-w6jpl" Jul 2 00:43:21.976732 kubelet[2025]: I0702 00:43:21.976672 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-cni-path\") pod \"cilium-w6jpl\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") " pod="kube-system/cilium-w6jpl" Jul 2 00:43:21.976732 kubelet[2025]: I0702 00:43:21.976694 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e12c83a6-d6f9-417f-83b7-fef0196df593-hubble-tls\") pod \"cilium-w6jpl\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") " pod="kube-system/cilium-w6jpl" Jul 2 00:43:21.976732 kubelet[2025]: I0702 00:43:21.976713 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktpjl\" (UniqueName: \"kubernetes.io/projected/e12c83a6-d6f9-417f-83b7-fef0196df593-kube-api-access-ktpjl\") pod \"cilium-w6jpl\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") " pod="kube-system/cilium-w6jpl" Jul 2 00:43:21.976800 kubelet[2025]: I0702 00:43:21.976763 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3dd7618e-7a2d-4198-9320-408e788e6d62-kube-proxy\") pod \"kube-proxy-bvcpq\" (UID: \"3dd7618e-7a2d-4198-9320-408e788e6d62\") " pod="kube-system/kube-proxy-bvcpq" Jul 2 00:43:21.976823 kubelet[2025]: I0702 00:43:21.976810 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3dd7618e-7a2d-4198-9320-408e788e6d62-xtables-lock\") pod \"kube-proxy-bvcpq\" (UID: \"3dd7618e-7a2d-4198-9320-408e788e6d62\") " pod="kube-system/kube-proxy-bvcpq" Jul 2 00:43:21.976864 kubelet[2025]: I0702 00:43:21.976834 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3dd7618e-7a2d-4198-9320-408e788e6d62-lib-modules\") pod \"kube-proxy-bvcpq\" (UID: \"3dd7618e-7a2d-4198-9320-408e788e6d62\") " pod="kube-system/kube-proxy-bvcpq" Jul 2 00:43:21.976891 kubelet[2025]: I0702 00:43:21.976879 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-hostproc\") pod \"cilium-w6jpl\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") " pod="kube-system/cilium-w6jpl" Jul 2 00:43:21.976915 kubelet[2025]: I0702 00:43:21.976900 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-cilium-cgroup\") pod \"cilium-w6jpl\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") " pod="kube-system/cilium-w6jpl" Jul 2 00:43:21.976941 kubelet[2025]: I0702 00:43:21.976921 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-etc-cni-netd\") pod \"cilium-w6jpl\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") " pod="kube-system/cilium-w6jpl" Jul 2 00:43:21.976985 kubelet[2025]: I0702 00:43:21.976965 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-host-proc-sys-kernel\") pod \"cilium-w6jpl\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") " pod="kube-system/cilium-w6jpl" Jul 2 00:43:21.977022 kubelet[2025]: I0702 00:43:21.976993 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-bpf-maps\") pod \"cilium-w6jpl\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") " pod="kube-system/cilium-w6jpl" Jul 2 00:43:21.977022 kubelet[2025]: I0702 00:43:21.977021 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-lib-modules\") pod \"cilium-w6jpl\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") " pod="kube-system/cilium-w6jpl" Jul 2 00:43:21.977078 kubelet[2025]: I0702 00:43:21.977056 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-host-proc-sys-net\") pod \"cilium-w6jpl\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") " pod="kube-system/cilium-w6jpl" Jul 2 00:43:21.977105 kubelet[2025]: I0702 00:43:21.977078 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e12c83a6-d6f9-417f-83b7-fef0196df593-clustermesh-secrets\") pod \"cilium-w6jpl\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") " pod="kube-system/cilium-w6jpl" Jul 2 00:43:21.977149 kubelet[2025]: I0702 00:43:21.977108 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e12c83a6-d6f9-417f-83b7-fef0196df593-cilium-config-path\") pod \"cilium-w6jpl\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") " pod="kube-system/cilium-w6jpl" Jul 2 00:43:21.995572 update_engine[1206]: I0702 00:43:21.995179 1206 update_attempter.cc:509] Updating boot flags... 
Jul 2 00:43:22.267763 kubelet[2025]: E0702 00:43:22.267729 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:22.268416 env[1216]: time="2024-07-02T00:43:22.268370024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bvcpq,Uid:3dd7618e-7a2d-4198-9320-408e788e6d62,Namespace:kube-system,Attempt:0,}" Jul 2 00:43:22.273165 kubelet[2025]: E0702 00:43:22.273122 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:22.273971 env[1216]: time="2024-07-02T00:43:22.273742505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w6jpl,Uid:e12c83a6-d6f9-417f-83b7-fef0196df593,Namespace:kube-system,Attempt:0,}" Jul 2 00:43:22.289356 env[1216]: time="2024-07-02T00:43:22.289182671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:43:22.289356 env[1216]: time="2024-07-02T00:43:22.289228365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:43:22.289356 env[1216]: time="2024-07-02T00:43:22.289241328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:43:22.289697 env[1216]: time="2024-07-02T00:43:22.289453710Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee235547f50096b316885cf6a70cc131b4921f2430a96051c61f7c6976918af8 pid=2134 runtime=io.containerd.runc.v2 Jul 2 00:43:22.292590 env[1216]: time="2024-07-02T00:43:22.292413290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:43:22.292590 env[1216]: time="2024-07-02T00:43:22.292553491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:43:22.292590 env[1216]: time="2024-07-02T00:43:22.292564534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:43:22.292921 env[1216]: time="2024-07-02T00:43:22.292884947Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf pid=2152 runtime=io.containerd.runc.v2 Jul 2 00:43:22.305168 systemd[1]: Started cri-containerd-685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf.scope. Jul 2 00:43:22.309856 systemd[1]: Started cri-containerd-ee235547f50096b316885cf6a70cc131b4921f2430a96051c61f7c6976918af8.scope. 
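Each "starting signal loop" entry is a containerd runtime-v2 shim coming up for a new sandbox; the path= field is the shim's working directory, so the live tasks on a node can be enumerated straight from that root. A read-only sketch (assumes the default containerd paths shown in the log):

    import os

    # List shim task directories under the containerd v2 root for the
    # k8s.io namespace, matching the path= fields logged above.
    ROOT = "/run/containerd/io.containerd.runtime.v2.task/k8s.io"
    if os.path.isdir(ROOT):
        for task_id in sorted(os.listdir(ROOT)):
            print(task_id)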
Jul 2 00:43:22.358761 env[1216]: time="2024-07-02T00:43:22.358720157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w6jpl,Uid:e12c83a6-d6f9-417f-83b7-fef0196df593,Namespace:kube-system,Attempt:0,} returns sandbox id \"685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf\"" Jul 2 00:43:22.359476 kubelet[2025]: E0702 00:43:22.359450 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:22.360716 env[1216]: time="2024-07-02T00:43:22.360685288Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 00:43:22.369222 env[1216]: time="2024-07-02T00:43:22.369190640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bvcpq,Uid:3dd7618e-7a2d-4198-9320-408e788e6d62,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee235547f50096b316885cf6a70cc131b4921f2430a96051c61f7c6976918af8\"" Jul 2 00:43:22.370097 kubelet[2025]: E0702 00:43:22.370076 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:22.372337 env[1216]: time="2024-07-02T00:43:22.372303584Z" level=info msg="CreateContainer within sandbox \"ee235547f50096b316885cf6a70cc131b4921f2430a96051c61f7c6976918af8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:43:22.392176 env[1216]: time="2024-07-02T00:43:22.389891735Z" level=info msg="CreateContainer within sandbox \"ee235547f50096b316885cf6a70cc131b4921f2430a96051c61f7c6976918af8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c2b93325c832bf92d8b92cf8da442e0585d81f0d319d5d1091464e64946a90da\"" Jul 2 00:43:22.392176 env[1216]: time="2024-07-02T00:43:22.390758747Z" level=info msg="StartContainer for \"c2b93325c832bf92d8b92cf8da442e0585d81f0d319d5d1091464e64946a90da\"" Jul 2 00:43:22.420753 kubelet[2025]: I0702 00:43:22.420704 2025 topology_manager.go:215] "Topology Admit Handler" podUID="123230f8-b95c-43f9-ae22-20cd9dde7043" podNamespace="kube-system" podName="cilium-operator-5cc964979-kc99p" Jul 2 00:43:22.426621 systemd[1]: Created slice kubepods-besteffort-pod123230f8_b95c_43f9_ae22_20cd9dde7043.slice. Jul 2 00:43:22.450960 systemd[1]: Started cri-containerd-c2b93325c832bf92d8b92cf8da442e0585d81f0d319d5d1091464e64946a90da.scope. 
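Both cilium images are pulled by tag-plus-digest references (repo:tag@sha256:...), so the digest, not the tag, is what actually gets resolved; the later ImageCreate events confirm the same digest. A simplified reference split (a sketch; containerd's own reference parser handles many more forms):

    import re

    REF = re.compile(r"^(?P<repo>[^@]+?)(?::(?P<tag>[\w.-]+))?(?:@(?P<digest>sha256:[0-9a-f]+))?$")

    def parse_ref(ref: str):
        m = REF.match(ref)
        return m.group("repo"), m.group("tag"), m.group("digest")

    print(parse_ref("quay.io/cilium/cilium:v1.12.5@sha256:"
                    "06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"))
    # ('quay.io/cilium/cilium', 'v1.12.5', 'sha256:06ce...')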
Jul 2 00:43:22.480011 kubelet[2025]: I0702 00:43:22.479971 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/123230f8-b95c-43f9-ae22-20cd9dde7043-cilium-config-path\") pod \"cilium-operator-5cc964979-kc99p\" (UID: \"123230f8-b95c-43f9-ae22-20cd9dde7043\") " pod="kube-system/cilium-operator-5cc964979-kc99p" Jul 2 00:43:22.480011 kubelet[2025]: I0702 00:43:22.480017 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdhbm\" (UniqueName: \"kubernetes.io/projected/123230f8-b95c-43f9-ae22-20cd9dde7043-kube-api-access-mdhbm\") pod \"cilium-operator-5cc964979-kc99p\" (UID: \"123230f8-b95c-43f9-ae22-20cd9dde7043\") " pod="kube-system/cilium-operator-5cc964979-kc99p" Jul 2 00:43:22.489896 env[1216]: time="2024-07-02T00:43:22.489838138Z" level=info msg="StartContainer for \"c2b93325c832bf92d8b92cf8da442e0585d81f0d319d5d1091464e64946a90da\" returns successfully" Jul 2 00:43:22.729491 kubelet[2025]: E0702 00:43:22.729452 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:22.730154 env[1216]: time="2024-07-02T00:43:22.730105674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-kc99p,Uid:123230f8-b95c-43f9-ae22-20cd9dde7043,Namespace:kube-system,Attempt:0,}" Jul 2 00:43:22.746271 env[1216]: time="2024-07-02T00:43:22.746168622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:43:22.746271 env[1216]: time="2024-07-02T00:43:22.746210434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:43:22.746626 env[1216]: time="2024-07-02T00:43:22.746468349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:43:22.746818 env[1216]: time="2024-07-02T00:43:22.746723423Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce7f0ecdacacc20da4f804916d4925a0a2630de482b29ea537db893e0daebd2a pid=2307 runtime=io.containerd.runc.v2 Jul 2 00:43:22.757594 systemd[1]: Started cri-containerd-ce7f0ecdacacc20da4f804916d4925a0a2630de482b29ea537db893e0daebd2a.scope. 
Jul 2 00:43:22.802513 env[1216]: time="2024-07-02T00:43:22.802463700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-kc99p,Uid:123230f8-b95c-43f9-ae22-20cd9dde7043,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce7f0ecdacacc20da4f804916d4925a0a2630de482b29ea537db893e0daebd2a\"" Jul 2 00:43:22.803167 kubelet[2025]: E0702 00:43:22.803122 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:23.131524 kubelet[2025]: E0702 00:43:23.131174 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:23.139637 kubelet[2025]: I0702 00:43:23.139581 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bvcpq" podStartSLOduration=2.139540577 podStartE2EDuration="2.139540577s" podCreationTimestamp="2024-07-02 00:43:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:43:23.139522492 +0000 UTC m=+16.161891020" watchObservedRunningTime="2024-07-02 00:43:23.139540577 +0000 UTC m=+16.161909105" Jul 2 00:43:26.023635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1288338579.mount: Deactivated successfully. Jul 2 00:43:28.316543 env[1216]: time="2024-07-02T00:43:28.316497167Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:28.317942 env[1216]: time="2024-07-02T00:43:28.317915956Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:28.319415 env[1216]: time="2024-07-02T00:43:28.319374795Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:28.320065 env[1216]: time="2024-07-02T00:43:28.320038260Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 2 00:43:28.323432 env[1216]: time="2024-07-02T00:43:28.321825770Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 00:43:28.327955 env[1216]: time="2024-07-02T00:43:28.327914859Z" level=info msg="CreateContainer within sandbox \"685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:43:28.337048 env[1216]: time="2024-07-02T00:43:28.337014644Z" level=info msg="CreateContainer within sandbox \"685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579\"" Jul 2 00:43:28.337551 env[1216]: time="2024-07-02T00:43:28.337524996Z" level=info 
msg="StartContainer for \"5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579\"" Jul 2 00:43:28.360117 systemd[1]: Started cri-containerd-5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579.scope. Jul 2 00:43:28.406157 env[1216]: time="2024-07-02T00:43:28.404241795Z" level=info msg="StartContainer for \"5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579\" returns successfully" Jul 2 00:43:28.434706 systemd[1]: cri-containerd-5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579.scope: Deactivated successfully. Jul 2 00:43:28.580364 env[1216]: time="2024-07-02T00:43:28.580238804Z" level=info msg="shim disconnected" id=5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579 Jul 2 00:43:28.580364 env[1216]: time="2024-07-02T00:43:28.580286214Z" level=warning msg="cleaning up after shim disconnected" id=5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579 namespace=k8s.io Jul 2 00:43:28.580364 env[1216]: time="2024-07-02T00:43:28.580295256Z" level=info msg="cleaning up dead shim" Jul 2 00:43:28.587050 env[1216]: time="2024-07-02T00:43:28.586984956Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:43:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2461 runtime=io.containerd.runc.v2\n" Jul 2 00:43:29.143028 kubelet[2025]: E0702 00:43:29.142830 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:29.148615 env[1216]: time="2024-07-02T00:43:29.148571458Z" level=info msg="CreateContainer within sandbox \"685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:43:29.163043 env[1216]: time="2024-07-02T00:43:29.162983026Z" level=info msg="CreateContainer within sandbox \"685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c\"" Jul 2 00:43:29.163598 env[1216]: time="2024-07-02T00:43:29.163564067Z" level=info msg="StartContainer for \"69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c\"" Jul 2 00:43:29.181044 systemd[1]: Started cri-containerd-69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c.scope. Jul 2 00:43:29.225797 env[1216]: time="2024-07-02T00:43:29.225745642Z" level=info msg="StartContainer for \"69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c\" returns successfully" Jul 2 00:43:29.237779 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:43:29.238020 systemd[1]: Stopped systemd-sysctl.service. Jul 2 00:43:29.238358 systemd[1]: Stopping systemd-sysctl.service... Jul 2 00:43:29.239808 systemd[1]: Starting systemd-sysctl.service... Jul 2 00:43:29.240957 systemd[1]: cri-containerd-69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c.scope: Deactivated successfully. Jul 2 00:43:29.251379 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 00:43:29.278011 env[1216]: time="2024-07-02T00:43:29.277959338Z" level=info msg="shim disconnected" id=69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c Jul 2 00:43:29.278011 env[1216]: time="2024-07-02T00:43:29.278009348Z" level=warning msg="cleaning up after shim disconnected" id=69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c namespace=k8s.io Jul 2 00:43:29.278011 env[1216]: time="2024-07-02T00:43:29.278020070Z" level=info msg="cleaning up dead shim" Jul 2 00:43:29.285538 env[1216]: time="2024-07-02T00:43:29.285496990Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:43:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2527 runtime=io.containerd.runc.v2\n" Jul 2 00:43:29.335241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579-rootfs.mount: Deactivated successfully. Jul 2 00:43:29.879891 env[1216]: time="2024-07-02T00:43:29.879845573Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:29.880880 env[1216]: time="2024-07-02T00:43:29.880853903Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:29.882166 env[1216]: time="2024-07-02T00:43:29.882134970Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:43:29.882781 env[1216]: time="2024-07-02T00:43:29.882747578Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 2 00:43:29.885320 env[1216]: time="2024-07-02T00:43:29.885292789Z" level=info msg="CreateContainer within sandbox \"ce7f0ecdacacc20da4f804916d4925a0a2630de482b29ea537db893e0daebd2a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 00:43:29.894439 env[1216]: time="2024-07-02T00:43:29.894401370Z" level=info msg="CreateContainer within sandbox \"ce7f0ecdacacc20da4f804916d4925a0a2630de482b29ea537db893e0daebd2a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206\"" Jul 2 00:43:29.895171 env[1216]: time="2024-07-02T00:43:29.895145245Z" level=info msg="StartContainer for \"e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206\"" Jul 2 00:43:29.910381 systemd[1]: Started cri-containerd-e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206.scope. 
Jul 2 00:43:29.951667 env[1216]: time="2024-07-02T00:43:29.951623790Z" level=info msg="StartContainer for \"e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206\" returns successfully" Jul 2 00:43:30.145052 kubelet[2025]: E0702 00:43:30.144956 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:30.146625 kubelet[2025]: E0702 00:43:30.146493 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:30.146807 env[1216]: time="2024-07-02T00:43:30.146768966Z" level=info msg="CreateContainer within sandbox \"685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:43:30.159931 env[1216]: time="2024-07-02T00:43:30.159871662Z" level=info msg="CreateContainer within sandbox \"685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a\"" Jul 2 00:43:30.160444 env[1216]: time="2024-07-02T00:43:30.160422332Z" level=info msg="StartContainer for \"3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a\"" Jul 2 00:43:30.191440 systemd[1]: Started cri-containerd-3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a.scope. Jul 2 00:43:30.247002 env[1216]: time="2024-07-02T00:43:30.246948652Z" level=info msg="StartContainer for \"3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a\" returns successfully" Jul 2 00:43:30.267302 systemd[1]: cri-containerd-3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a.scope: Deactivated successfully. 
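mount-bpf-fs is the cilium init step that makes sure a bpf filesystem is mounted at /sys/fs/bpf so the agent's maps outlive restarts; like the earlier init steps, its container runs to completion and the scope is deactivated. A small check of the state it leaves behind (illustrative; reads /proc/self/mounts):

    # Verify a bpf filesystem is mounted at /sys/fs/bpf, the state the
    # mount-bpf-fs init container above is responsible for.
    def bpf_fs_mounted(mounts="/proc/self/mounts"):
        with open(mounts) as f:
            for line in f:
                fields = line.split()
                if len(fields) >= 3 and fields[1] == "/sys/fs/bpf" and fields[2] == "bpf":
                    return True
        return False

    print("bpf fs mounted:", bpf_fs_mounted())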
Jul 2 00:43:30.350669 env[1216]: time="2024-07-02T00:43:30.350624436Z" level=info msg="shim disconnected" id=3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a Jul 2 00:43:30.350669 env[1216]: time="2024-07-02T00:43:30.350667525Z" level=warning msg="cleaning up after shim disconnected" id=3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a namespace=k8s.io Jul 2 00:43:30.350669 env[1216]: time="2024-07-02T00:43:30.350677687Z" level=info msg="cleaning up dead shim" Jul 2 00:43:30.360235 env[1216]: time="2024-07-02T00:43:30.360187106Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:43:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2620 runtime=io.containerd.runc.v2\n" Jul 2 00:43:31.150696 kubelet[2025]: E0702 00:43:31.150527 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:31.150696 kubelet[2025]: E0702 00:43:31.150576 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:31.153051 env[1216]: time="2024-07-02T00:43:31.152993516Z" level=info msg="CreateContainer within sandbox \"685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:43:31.164062 env[1216]: time="2024-07-02T00:43:31.164010664Z" level=info msg="CreateContainer within sandbox \"685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670\"" Jul 2 00:43:31.164664 env[1216]: time="2024-07-02T00:43:31.164631983Z" level=info msg="StartContainer for \"bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670\"" Jul 2 00:43:31.170008 kubelet[2025]: I0702 00:43:31.169035 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-kc99p" podStartSLOduration=2.089689118 podStartE2EDuration="9.168992657s" podCreationTimestamp="2024-07-02 00:43:22 +0000 UTC" firstStartedPulling="2024-07-02 00:43:22.803623717 +0000 UTC m=+15.825992245" lastFinishedPulling="2024-07-02 00:43:29.882927256 +0000 UTC m=+22.905295784" observedRunningTime="2024-07-02 00:43:30.18324349 +0000 UTC m=+23.205612018" watchObservedRunningTime="2024-07-02 00:43:31.168992657 +0000 UTC m=+24.191361185" Jul 2 00:43:31.181374 systemd[1]: Started cri-containerd-bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670.scope. Jul 2 00:43:31.221258 env[1216]: time="2024-07-02T00:43:31.221209086Z" level=info msg="StartContainer for \"bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670\" returns successfully" Jul 2 00:43:31.221818 systemd[1]: cri-containerd-bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670.scope: Deactivated successfully. 
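The pod_startup_latency_tracker entries make the SLO arithmetic explicit: podStartSLOduration is the end-to-end duration minus the image-pull window, so for cilium-operator 9.168992657s minus (lastFinishedPulling - firstStartedPulling) gives the logged 2.089689118s, while pods whose pull timestamps are the zero time ("0001-01-01 ...") had nothing to pull and report SLO equal to E2E. Recomputed from the values above:

    # Offsets in seconds from the pod creationTimestamp (00:43:22), taken
    # from the cilium-operator entry in the log.
    observed_running = 9.168992657   # 00:43:31.168992657
    first_pull = 0.803623717         # 00:43:22.803623717
    last_pull = 7.882927256          # 00:43:29.882927256

    slo = observed_running - (last_pull - first_pull)
    print(f"{slo:.9f}")  # 2.089689118, matching podStartSLOduration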
Jul 2 00:43:31.242233 env[1216]: time="2024-07-02T00:43:31.242183098Z" level=info msg="shim disconnected" id=bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670 Jul 2 00:43:31.242452 env[1216]: time="2024-07-02T00:43:31.242237068Z" level=warning msg="cleaning up after shim disconnected" id=bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670 namespace=k8s.io Jul 2 00:43:31.242452 env[1216]: time="2024-07-02T00:43:31.242246670Z" level=info msg="cleaning up dead shim" Jul 2 00:43:31.248312 env[1216]: time="2024-07-02T00:43:31.248279824Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:43:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2675 runtime=io.containerd.runc.v2\n" Jul 2 00:43:31.335686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670-rootfs.mount: Deactivated successfully. Jul 2 00:43:32.154614 kubelet[2025]: E0702 00:43:32.154568 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:32.157428 env[1216]: time="2024-07-02T00:43:32.157390017Z" level=info msg="CreateContainer within sandbox \"685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:43:32.170893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4157981657.mount: Deactivated successfully. Jul 2 00:43:32.176791 env[1216]: time="2024-07-02T00:43:32.176735926Z" level=info msg="CreateContainer within sandbox \"685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32\"" Jul 2 00:43:32.177538 env[1216]: time="2024-07-02T00:43:32.177497985Z" level=info msg="StartContainer for \"8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32\"" Jul 2 00:43:32.209163 systemd[1]: Started cri-containerd-8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32.scope. Jul 2 00:43:32.260839 env[1216]: time="2024-07-02T00:43:32.257172239Z" level=info msg="StartContainer for \"8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32\" returns successfully" Jul 2 00:43:32.429804 kubelet[2025]: I0702 00:43:32.429192 2025 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:43:32.459486 kubelet[2025]: I0702 00:43:32.459439 2025 topology_manager.go:215] "Topology Admit Handler" podUID="38e5526f-7c5d-4efb-a2f7-cb91c54a6932" podNamespace="kube-system" podName="coredns-76f75df574-h8jc4" Jul 2 00:43:32.459652 kubelet[2025]: I0702 00:43:32.459632 2025 topology_manager.go:215] "Topology Admit Handler" podUID="53bf847a-41a9-43a2-b84f-cbd8ec4ede86" podNamespace="kube-system" podName="coredns-76f75df574-fl7n8" Jul 2 00:43:32.464600 systemd[1]: Created slice kubepods-burstable-pod38e5526f_7c5d_4efb_a2f7_cb91c54a6932.slice. Jul 2 00:43:32.470349 systemd[1]: Created slice kubepods-burstable-pod53bf847a_41a9_43a2_b84f_cbd8ec4ede86.slice. 
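"Fast updating node status as it just became ready" is the moment the node flips to Ready, which is why the two pending coredns pods are admitted immediately afterwards. The same condition can be read back with the official kubernetes Python client (a sketch; assumes the kubernetes package and a reachable kubeconfig for this cluster):

    from kubernetes import client, config

    config.load_kube_config()          # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    node = v1.read_node("localhost")   # node name as registered in the log
    ready = next(c for c in node.status.conditions if c.type == "Ready")
    print(ready.status, ready.reason)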
Jul 2 00:43:32.554552 kubelet[2025]: I0702 00:43:32.554504 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38e5526f-7c5d-4efb-a2f7-cb91c54a6932-config-volume\") pod \"coredns-76f75df574-h8jc4\" (UID: \"38e5526f-7c5d-4efb-a2f7-cb91c54a6932\") " pod="kube-system/coredns-76f75df574-h8jc4" Jul 2 00:43:32.554703 kubelet[2025]: I0702 00:43:32.554606 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkjn7\" (UniqueName: \"kubernetes.io/projected/38e5526f-7c5d-4efb-a2f7-cb91c54a6932-kube-api-access-pkjn7\") pod \"coredns-76f75df574-h8jc4\" (UID: \"38e5526f-7c5d-4efb-a2f7-cb91c54a6932\") " pod="kube-system/coredns-76f75df574-h8jc4" Jul 2 00:43:32.554703 kubelet[2025]: I0702 00:43:32.554646 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5kpq\" (UniqueName: \"kubernetes.io/projected/53bf847a-41a9-43a2-b84f-cbd8ec4ede86-kube-api-access-f5kpq\") pod \"coredns-76f75df574-fl7n8\" (UID: \"53bf847a-41a9-43a2-b84f-cbd8ec4ede86\") " pod="kube-system/coredns-76f75df574-fl7n8" Jul 2 00:43:32.554771 kubelet[2025]: I0702 00:43:32.554672 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53bf847a-41a9-43a2-b84f-cbd8ec4ede86-config-volume\") pod \"coredns-76f75df574-fl7n8\" (UID: \"53bf847a-41a9-43a2-b84f-cbd8ec4ede86\") " pod="kube-system/coredns-76f75df574-fl7n8" Jul 2 00:43:32.671152 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 2 00:43:32.768391 kubelet[2025]: E0702 00:43:32.768357 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:32.769064 env[1216]: time="2024-07-02T00:43:32.769009320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h8jc4,Uid:38e5526f-7c5d-4efb-a2f7-cb91c54a6932,Namespace:kube-system,Attempt:0,}" Jul 2 00:43:32.772621 kubelet[2025]: E0702 00:43:32.772599 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:32.773197 env[1216]: time="2024-07-02T00:43:32.773157641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fl7n8,Uid:53bf847a-41a9-43a2-b84f-cbd8ec4ede86,Namespace:kube-system,Attempt:0,}" Jul 2 00:43:32.941161 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
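The kernel's "Unprivileged eBPF is enabled" warning fires because kernel.unprivileged_bpf_disabled is 0 on this host, which the Spectre-v2 BHB mitigation code flags as a data-leak risk. Reading the sysctl directly (illustrative; value semantics per the kernel documentation):

    # 0: unprivileged bpf() allowed (the state being warned about)
    # 1: disabled, locked until reboot
    # 2: disabled, but a privileged writer may re-enable it
    with open("/proc/sys/kernel/unprivileged_bpf_disabled") as f:
        value = int(f.read().strip())
    print(value, {0: "enabled", 1: "disabled (locked)", 2: "disabled"}[value])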
Jul 2 00:43:33.159182 kubelet[2025]: E0702 00:43:33.159068 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:33.172773 kubelet[2025]: I0702 00:43:33.172726 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-w6jpl" podStartSLOduration=6.211279801 podStartE2EDuration="12.172686695s" podCreationTimestamp="2024-07-02 00:43:21 +0000 UTC" firstStartedPulling="2024-07-02 00:43:22.360206429 +0000 UTC m=+15.382574957" lastFinishedPulling="2024-07-02 00:43:28.321613363 +0000 UTC m=+21.343981851" observedRunningTime="2024-07-02 00:43:33.172425409 +0000 UTC m=+26.194793937" watchObservedRunningTime="2024-07-02 00:43:33.172686695 +0000 UTC m=+26.195055223" Jul 2 00:43:34.160905 kubelet[2025]: E0702 00:43:34.160865 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:34.440250 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:50178.service. Jul 2 00:43:34.493075 sshd[2855]: Accepted publickey for core from 10.0.0.1 port 50178 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:43:34.494792 sshd[2855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:43:34.498524 systemd-logind[1205]: New session 6 of user core. Jul 2 00:43:34.499009 systemd[1]: Started session-6.scope. Jul 2 00:43:34.565255 systemd-networkd[1041]: cilium_host: Link UP Jul 2 00:43:34.565470 systemd-networkd[1041]: cilium_net: Link UP Jul 2 00:43:34.565626 systemd-networkd[1041]: cilium_net: Gained carrier Jul 2 00:43:34.566189 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 2 00:43:34.566230 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 00:43:34.566064 systemd-networkd[1041]: cilium_host: Gained carrier Jul 2 00:43:34.650949 sshd[2855]: pam_unix(sshd:session): session closed for user core Jul 2 00:43:34.653473 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:50178.service: Deactivated successfully. Jul 2 00:43:34.654253 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:43:34.654797 systemd-logind[1205]: Session 6 logged out. Waiting for processes to exit. Jul 2 00:43:34.655734 systemd-logind[1205]: Removed session 6. 
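systemd-networkd is now reporting cilium's datapath interfaces: the cilium_host/cilium_net veth pair above, with cilium_vxlan for the overlay and per-pod lxc* veths following. Those links can be listed from userspace with pyroute2 (third-party package; a read-only sketch):

    from pyroute2 import IPRoute

    # Print the cilium-related links systemd-networkd logs as they appear.
    with IPRoute() as ipr:
        for link in ipr.get_links():
            name = link.get_attr("IFLA_IFNAME")
            if name and (name.startswith("cilium_") or name.startswith("lxc")):
                print(name, link.get_attr("IFLA_OPERSTATE"))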
Jul 2 00:43:34.673535 systemd-networkd[1041]: cilium_vxlan: Link UP Jul 2 00:43:34.673542 systemd-networkd[1041]: cilium_vxlan: Gained carrier Jul 2 00:43:34.782244 systemd-networkd[1041]: cilium_host: Gained IPv6LL Jul 2 00:43:34.854261 systemd-networkd[1041]: cilium_net: Gained IPv6LL Jul 2 00:43:34.998164 kernel: NET: Registered PF_ALG protocol family Jul 2 00:43:35.162587 kubelet[2025]: E0702 00:43:35.162491 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:35.563883 systemd-networkd[1041]: lxc_health: Link UP Jul 2 00:43:35.582235 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 00:43:35.582611 systemd-networkd[1041]: lxc_health: Gained carrier Jul 2 00:43:35.854165 systemd-networkd[1041]: lxce14eae7993a6: Link UP Jul 2 00:43:35.856369 systemd-networkd[1041]: lxc308982aee4e3: Link UP Jul 2 00:43:35.874165 kernel: eth0: renamed from tmp3145d Jul 2 00:43:35.878153 kernel: eth0: renamed from tmp0bfe3 Jul 2 00:43:35.886147 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce14eae7993a6: link becomes ready Jul 2 00:43:35.886221 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc308982aee4e3: link becomes ready Jul 2 00:43:35.886282 systemd-networkd[1041]: lxce14eae7993a6: Gained carrier Jul 2 00:43:35.886416 systemd-networkd[1041]: lxc308982aee4e3: Gained carrier Jul 2 00:43:36.277254 kubelet[2025]: E0702 00:43:36.277224 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:36.436338 systemd-networkd[1041]: cilium_vxlan: Gained IPv6LL Jul 2 00:43:36.814260 systemd-networkd[1041]: lxc_health: Gained IPv6LL Jul 2 00:43:37.165569 kubelet[2025]: E0702 00:43:37.165458 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:37.326300 systemd-networkd[1041]: lxc308982aee4e3: Gained IPv6LL Jul 2 00:43:37.647321 systemd-networkd[1041]: lxce14eae7993a6: Gained IPv6LL Jul 2 00:43:38.166787 kubelet[2025]: E0702 00:43:38.166738 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:39.414791 env[1216]: time="2024-07-02T00:43:39.414686139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:43:39.414791 env[1216]: time="2024-07-02T00:43:39.414730226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:43:39.414791 env[1216]: time="2024-07-02T00:43:39.414740987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:43:39.415435 env[1216]: time="2024-07-02T00:43:39.415377476Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bfe374a15b0a6f850a3b0ad581257c674a9dfa279a0de320246e923bd2e4749 pid=3267 runtime=io.containerd.runc.v2 Jul 2 00:43:39.420531 env[1216]: time="2024-07-02T00:43:39.420464031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:43:39.420621 env[1216]: time="2024-07-02T00:43:39.420547083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:43:39.420621 env[1216]: time="2024-07-02T00:43:39.420582488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:43:39.420851 env[1216]: time="2024-07-02T00:43:39.420817441Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3145da59230ed1d3a7c5d2bda0c91fe0474354b9c68ea96d1958c0edb36ffeae pid=3285 runtime=io.containerd.runc.v2 Jul 2 00:43:39.430729 systemd[1]: Started cri-containerd-0bfe374a15b0a6f850a3b0ad581257c674a9dfa279a0de320246e923bd2e4749.scope. Jul 2 00:43:39.442430 systemd[1]: Started cri-containerd-3145da59230ed1d3a7c5d2bda0c91fe0474354b9c68ea96d1958c0edb36ffeae.scope. Jul 2 00:43:39.489121 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:43:39.492354 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:43:39.506454 env[1216]: time="2024-07-02T00:43:39.506412583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fl7n8,Uid:53bf847a-41a9-43a2-b84f-cbd8ec4ede86,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bfe374a15b0a6f850a3b0ad581257c674a9dfa279a0de320246e923bd2e4749\"" Jul 2 00:43:39.507025 kubelet[2025]: E0702 00:43:39.506995 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:39.514364 env[1216]: time="2024-07-02T00:43:39.514321493Z" level=info msg="CreateContainer within sandbox \"0bfe374a15b0a6f850a3b0ad581257c674a9dfa279a0de320246e923bd2e4749\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:43:39.516729 env[1216]: time="2024-07-02T00:43:39.516696667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h8jc4,Uid:38e5526f-7c5d-4efb-a2f7-cb91c54a6932,Namespace:kube-system,Attempt:0,} returns sandbox id \"3145da59230ed1d3a7c5d2bda0c91fe0474354b9c68ea96d1958c0edb36ffeae\"" Jul 2 00:43:39.517856 kubelet[2025]: E0702 00:43:39.517709 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:43:39.519957 env[1216]: time="2024-07-02T00:43:39.519926161Z" level=info msg="CreateContainer within sandbox \"3145da59230ed1d3a7c5d2bda0c91fe0474354b9c68ea96d1958c0edb36ffeae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:43:39.582086 env[1216]: time="2024-07-02T00:43:39.582035964Z" level=info msg="CreateContainer within sandbox \"0bfe374a15b0a6f850a3b0ad581257c674a9dfa279a0de320246e923bd2e4749\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b9c809f2b503ae0c6acf591296904a463eedd290438c47bd2dd7952b32cc952\"" Jul 2 00:43:39.582641 env[1216]: time="2024-07-02T00:43:39.582612405Z" level=info msg="CreateContainer within sandbox \"3145da59230ed1d3a7c5d2bda0c91fe0474354b9c68ea96d1958c0edb36ffeae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ccd9e4ac54da272554d40d9606e28b2330561e248adcb291e00f7fb4a1648f6a\"" Jul 2 
00:43:39.582874 env[1216]: time="2024-07-02T00:43:39.582660812Z" level=info msg="StartContainer for \"3b9c809f2b503ae0c6acf591296904a463eedd290438c47bd2dd7952b32cc952\"" Jul 2 00:43:39.584551 env[1216]: time="2024-07-02T00:43:39.584520433Z" level=info msg="StartContainer for \"ccd9e4ac54da272554d40d9606e28b2330561e248adcb291e00f7fb4a1648f6a\"" Jul 2 00:43:39.601305 systemd[1]: Started cri-containerd-3b9c809f2b503ae0c6acf591296904a463eedd290438c47bd2dd7952b32cc952.scope. Jul 2 00:43:39.614575 systemd[1]: Started cri-containerd-ccd9e4ac54da272554d40d9606e28b2330561e248adcb291e00f7fb4a1648f6a.scope. Jul 2 00:43:39.651961 env[1216]: time="2024-07-02T00:43:39.651669744Z" level=info msg="StartContainer for \"ccd9e4ac54da272554d40d9606e28b2330561e248adcb291e00f7fb4a1648f6a\" returns successfully" Jul 2 00:43:39.657780 env[1216]: time="2024-07-02T00:43:39.652788262Z" level=info msg="StartContainer for \"3b9c809f2b503ae0c6acf591296904a463eedd290438c47bd2dd7952b32cc952\" returns successfully" Jul 2 00:43:39.656396 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:50190.service. Jul 2 00:43:39.704020 sshd[3412]: Accepted publickey for core from 10.0.0.1 port 50190 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c Jul 2 00:43:39.705674 sshd[3412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:43:39.710211 systemd-logind[1205]: New session 7 of user core. Jul 2 00:43:39.710553 systemd[1]: Started session-7.scope. Jul 2 00:43:39.850292 sshd[3412]: pam_unix(sshd:session): session closed for user core Jul 2 00:43:39.852657 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:50190.service: Deactivated successfully. Jul 2 00:43:39.853391 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:43:39.853869 systemd-logind[1205]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:43:39.854540 systemd-logind[1205]: Removed session 7. 
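With both coredns replicas started, in-cluster name resolution should begin working; a hypothetical smoke test (the service name below is the Kubernetes convention, not something taken from this log, and it only resolves from inside the cluster's DNS scope):

    import socket

    # Hypothetical check once coredns is serving; kubernetes.default.svc is
    # the conventional in-cluster API server name.
    try:
        infos = socket.getaddrinfo("kubernetes.default.svc.cluster.local",
                                   443, proto=socket.IPPROTO_TCP)
        print("resolved:", sorted({info[4][0] for info in infos}))
    except socket.gaierror as exc:
        print("resolution failed:", exc)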
Jul 2 00:43:40.171785 kubelet[2025]: E0702 00:43:40.171733 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:43:40.173286 kubelet[2025]: E0702 00:43:40.173252 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:43:40.182324 kubelet[2025]: I0702 00:43:40.182289 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-fl7n8" podStartSLOduration=18.182255616 podStartE2EDuration="18.182255616s" podCreationTimestamp="2024-07-02 00:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:43:40.181260281 +0000 UTC m=+33.203628809" watchObservedRunningTime="2024-07-02 00:43:40.182255616 +0000 UTC m=+33.204624104"
Jul 2 00:43:40.202026 kubelet[2025]: I0702 00:43:40.201985 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-h8jc4" podStartSLOduration=18.201946769 podStartE2EDuration="18.201946769s" podCreationTimestamp="2024-07-02 00:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:43:40.200378316 +0000 UTC m=+33.222746844" watchObservedRunningTime="2024-07-02 00:43:40.201946769 +0000 UTC m=+33.224315297"
Jul 2 00:43:40.419598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2207053198.mount: Deactivated successfully.
Jul 2 00:43:41.175183 kubelet[2025]: E0702 00:43:41.175148 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:43:41.175681 kubelet[2025]: E0702 00:43:41.175642 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:43:42.176706 kubelet[2025]: E0702 00:43:42.176671 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:43:42.176706 kubelet[2025]: E0702 00:43:42.176710 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:43:44.870988 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:51342.service.
Jul 2 00:43:44.915012 sshd[3444]: Accepted publickey for core from 10.0.0.1 port 51342 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:43:44.916417 sshd[3444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:43:44.923394 systemd-logind[1205]: New session 8 of user core.
Jul 2 00:43:44.923507 systemd[1]: Started session-8.scope.
Jul 2 00:43:45.050594 sshd[3444]: pam_unix(sshd:session): session closed for user core
Jul 2 00:43:45.053626 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 00:43:45.054215 systemd-logind[1205]: Session 8 logged out. Waiting for processes to exit.
Jul 2 00:43:45.054359 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:51342.service: Deactivated successfully.
Jul 2 00:43:45.055413 systemd-logind[1205]: Removed session 8.
Jul 2 00:43:50.054290 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:51356.service.
Jul 2 00:43:50.100362 sshd[3460]: Accepted publickey for core from 10.0.0.1 port 51356 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:43:50.101795 sshd[3460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:43:50.105748 systemd-logind[1205]: New session 9 of user core.
Jul 2 00:43:50.106640 systemd[1]: Started session-9.scope.
Jul 2 00:43:50.219391 sshd[3460]: pam_unix(sshd:session): session closed for user core
Jul 2 00:43:50.226853 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:51356.service: Deactivated successfully.
Jul 2 00:43:50.227521 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 00:43:50.228095 systemd-logind[1205]: Session 9 logged out. Waiting for processes to exit.
Jul 2 00:43:50.229263 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:51360.service.
Jul 2 00:43:50.229928 systemd-logind[1205]: Removed session 9.
Jul 2 00:43:50.270433 sshd[3474]: Accepted publickey for core from 10.0.0.1 port 51360 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:43:50.272051 sshd[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:43:50.275602 systemd-logind[1205]: New session 10 of user core.
Jul 2 00:43:50.276422 systemd[1]: Started session-10.scope.
Jul 2 00:43:50.436819 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:43214.service.
Jul 2 00:43:50.438524 sshd[3474]: pam_unix(sshd:session): session closed for user core
Jul 2 00:43:50.453055 systemd-logind[1205]: Session 10 logged out. Waiting for processes to exit.
Jul 2 00:43:50.453321 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:51360.service: Deactivated successfully.
Jul 2 00:43:50.454071 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 00:43:50.455414 systemd-logind[1205]: Removed session 10.
Jul 2 00:43:50.485625 sshd[3484]: Accepted publickey for core from 10.0.0.1 port 43214 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:43:50.487280 sshd[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:43:50.492197 systemd-logind[1205]: New session 11 of user core.
Jul 2 00:43:50.492569 systemd[1]: Started session-11.scope.
Jul 2 00:43:50.609970 sshd[3484]: pam_unix(sshd:session): session closed for user core
Jul 2 00:43:50.612296 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:43214.service: Deactivated successfully.
Jul 2 00:43:50.613078 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 00:43:50.613721 systemd-logind[1205]: Session 11 logged out. Waiting for processes to exit.
Jul 2 00:43:50.614481 systemd-logind[1205]: Removed session 11.
Jul 2 00:43:55.615436 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:43228.service.
Jul 2 00:43:55.658501 sshd[3502]: Accepted publickey for core from 10.0.0.1 port 43228 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:43:55.660114 sshd[3502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:43:55.664196 systemd-logind[1205]: New session 12 of user core.
Jul 2 00:43:55.664291 systemd[1]: Started session-12.scope.
Jul 2 00:43:55.781965 sshd[3502]: pam_unix(sshd:session): session closed for user core
Jul 2 00:43:55.784391 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:43228.service: Deactivated successfully.
Jul 2 00:43:55.785194 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 00:43:55.787117 systemd-logind[1205]: Session 12 logged out. Waiting for processes to exit.
Jul 2 00:43:55.788509 systemd-logind[1205]: Removed session 12.
Jul 2 00:44:00.787542 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:45680.service.
Jul 2 00:44:00.828773 sshd[3516]: Accepted publickey for core from 10.0.0.1 port 45680 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:44:00.830052 sshd[3516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:44:00.833624 systemd-logind[1205]: New session 13 of user core.
Jul 2 00:44:00.834500 systemd[1]: Started session-13.scope.
Jul 2 00:44:00.942178 sshd[3516]: pam_unix(sshd:session): session closed for user core
Jul 2 00:44:00.945301 systemd-logind[1205]: Session 13 logged out. Waiting for processes to exit.
Jul 2 00:44:00.946496 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:45696.service.
Jul 2 00:44:00.947054 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:45680.service: Deactivated successfully.
Jul 2 00:44:00.947918 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 00:44:00.948702 systemd-logind[1205]: Removed session 13.
Jul 2 00:44:00.988978 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 45696 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:44:00.990691 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:44:00.995033 systemd-logind[1205]: New session 14 of user core.
Jul 2 00:44:00.995538 systemd[1]: Started session-14.scope.
Jul 2 00:44:01.206188 sshd[3528]: pam_unix(sshd:session): session closed for user core
Jul 2 00:44:01.209746 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:45696.service: Deactivated successfully.
Jul 2 00:44:01.210476 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 00:44:01.211100 systemd-logind[1205]: Session 14 logged out. Waiting for processes to exit.
Jul 2 00:44:01.212295 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:45704.service.
Jul 2 00:44:01.213422 systemd-logind[1205]: Removed session 14.
Jul 2 00:44:01.257921 sshd[3541]: Accepted publickey for core from 10.0.0.1 port 45704 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:44:01.259579 sshd[3541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:44:01.262929 systemd-logind[1205]: New session 15 of user core.
Jul 2 00:44:01.263783 systemd[1]: Started session-15.scope.
Jul 2 00:44:02.484796 sshd[3541]: pam_unix(sshd:session): session closed for user core
Jul 2 00:44:02.487530 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:45716.service.
Jul 2 00:44:02.488112 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:45704.service: Deactivated successfully.
Jul 2 00:44:02.488914 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 00:44:02.490742 systemd-logind[1205]: Session 15 logged out. Waiting for processes to exit.
Jul 2 00:44:02.491594 systemd-logind[1205]: Removed session 15.
Jul 2 00:44:02.532108 sshd[3559]: Accepted publickey for core from 10.0.0.1 port 45716 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:44:02.533435 sshd[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:44:02.537537 systemd-logind[1205]: New session 16 of user core.
Jul 2 00:44:02.537698 systemd[1]: Started session-16.scope.
Jul 2 00:44:02.756947 sshd[3559]: pam_unix(sshd:session): session closed for user core
Jul 2 00:44:02.759370 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:45730.service.
Jul 2 00:44:02.765409 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:45716.service: Deactivated successfully.
Jul 2 00:44:02.766086 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 00:44:02.767035 systemd-logind[1205]: Session 16 logged out. Waiting for processes to exit.
Jul 2 00:44:02.768627 systemd-logind[1205]: Removed session 16.
Jul 2 00:44:02.801464 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 45730 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:44:02.802808 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:44:02.806775 systemd-logind[1205]: New session 17 of user core.
Jul 2 00:44:02.807012 systemd[1]: Started session-17.scope.
Jul 2 00:44:02.917358 sshd[3571]: pam_unix(sshd:session): session closed for user core
Jul 2 00:44:02.919784 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:45730.service: Deactivated successfully.
Jul 2 00:44:02.920522 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 00:44:02.921001 systemd-logind[1205]: Session 17 logged out. Waiting for processes to exit.
Jul 2 00:44:02.921680 systemd-logind[1205]: Removed session 17.
Jul 2 00:44:07.925466 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:45746.service.
Jul 2 00:44:07.970060 sshd[3589]: Accepted publickey for core from 10.0.0.1 port 45746 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:44:07.971529 sshd[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:44:07.977068 systemd-logind[1205]: New session 18 of user core.
Jul 2 00:44:07.977564 systemd[1]: Started session-18.scope.
Jul 2 00:44:08.090437 sshd[3589]: pam_unix(sshd:session): session closed for user core
Jul 2 00:44:08.092778 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:45746.service: Deactivated successfully.
Jul 2 00:44:08.093502 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 00:44:08.094111 systemd-logind[1205]: Session 18 logged out. Waiting for processes to exit.
Jul 2 00:44:08.094898 systemd-logind[1205]: Removed session 18.
Jul 2 00:44:13.094757 systemd[1]: Started sshd@18-10.0.0.35:22-10.0.0.1:58808.service.
Jul 2 00:44:13.141293 sshd[3605]: Accepted publickey for core from 10.0.0.1 port 58808 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:44:13.142465 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:44:13.146568 systemd-logind[1205]: New session 19 of user core.
Jul 2 00:44:13.147289 systemd[1]: Started session-19.scope.
Jul 2 00:44:13.256850 sshd[3605]: pam_unix(sshd:session): session closed for user core
Jul 2 00:44:13.259452 systemd[1]: sshd@18-10.0.0.35:22-10.0.0.1:58808.service: Deactivated successfully.
Jul 2 00:44:13.260175 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 00:44:13.260697 systemd-logind[1205]: Session 19 logged out. Waiting for processes to exit.
Jul 2 00:44:13.261499 systemd-logind[1205]: Removed session 19.
Jul 2 00:44:17.090047 kubelet[2025]: E0702 00:44:17.088216 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:44:18.261476 systemd[1]: Started sshd@19-10.0.0.35:22-10.0.0.1:58810.service.
Jul 2 00:44:18.316943 sshd[3618]: Accepted publickey for core from 10.0.0.1 port 58810 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:44:18.318729 sshd[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:44:18.324561 systemd-logind[1205]: New session 20 of user core.
Jul 2 00:44:18.325059 systemd[1]: Started session-20.scope.
Jul 2 00:44:18.447474 sshd[3618]: pam_unix(sshd:session): session closed for user core
Jul 2 00:44:18.450288 systemd[1]: sshd@19-10.0.0.35:22-10.0.0.1:58810.service: Deactivated successfully.
Jul 2 00:44:18.451105 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 00:44:18.451706 systemd-logind[1205]: Session 20 logged out. Waiting for processes to exit.
Jul 2 00:44:18.452561 systemd-logind[1205]: Removed session 20.
Jul 2 00:44:23.451277 systemd[1]: Started sshd@20-10.0.0.35:22-10.0.0.1:47632.service.
Jul 2 00:44:23.494886 sshd[3634]: Accepted publickey for core from 10.0.0.1 port 47632 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:44:23.496668 sshd[3634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:44:23.500007 systemd-logind[1205]: New session 21 of user core.
Jul 2 00:44:23.500851 systemd[1]: Started session-21.scope.
Jul 2 00:44:23.610158 sshd[3634]: pam_unix(sshd:session): session closed for user core
Jul 2 00:44:23.613932 systemd[1]: Started sshd@21-10.0.0.35:22-10.0.0.1:47644.service.
Jul 2 00:44:23.614471 systemd[1]: sshd@20-10.0.0.35:22-10.0.0.1:47632.service: Deactivated successfully.
Jul 2 00:44:23.615250 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 00:44:23.615771 systemd-logind[1205]: Session 21 logged out. Waiting for processes to exit.
Jul 2 00:44:23.616707 systemd-logind[1205]: Removed session 21.
Jul 2 00:44:23.657486 sshd[3646]: Accepted publickey for core from 10.0.0.1 port 47644 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:44:23.658693 sshd[3646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:44:23.662075 systemd-logind[1205]: New session 22 of user core.
Jul 2 00:44:23.662954 systemd[1]: Started session-22.scope.
Jul 2 00:44:25.896522 env[1216]: time="2024-07-02T00:44:25.896478149Z" level=info msg="StopContainer for \"e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206\" with timeout 30 (s)"
Jul 2 00:44:25.897035 env[1216]: time="2024-07-02T00:44:25.897013125Z" level=info msg="Stop container \"e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206\" with signal terminated"
Jul 2 00:44:25.910632 systemd[1]: cri-containerd-e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206.scope: Deactivated successfully.
Jul 2 00:44:25.927661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206-rootfs.mount: Deactivated successfully.
Jul 2 00:44:25.936405 env[1216]: time="2024-07-02T00:44:25.936357723Z" level=info msg="shim disconnected" id=e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206
Jul 2 00:44:25.936405 env[1216]: time="2024-07-02T00:44:25.936403401Z" level=warning msg="cleaning up after shim disconnected" id=e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206 namespace=k8s.io
Jul 2 00:44:25.936707 env[1216]: time="2024-07-02T00:44:25.936412560Z" level=info msg="cleaning up dead shim"
Jul 2 00:44:25.941169 env[1216]: time="2024-07-02T00:44:25.941092151Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:44:25.945011 env[1216]: time="2024-07-02T00:44:25.944966217Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3690 runtime=io.containerd.runc.v2\n"
Jul 2 00:44:25.946776 env[1216]: time="2024-07-02T00:44:25.946743697Z" level=info msg="StopContainer for \"8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32\" with timeout 2 (s)"
Jul 2 00:44:25.947027 env[1216]: time="2024-07-02T00:44:25.947002046Z" level=info msg="Stop container \"8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32\" with signal terminated"
Jul 2 00:44:25.947167 env[1216]: time="2024-07-02T00:44:25.947143160Z" level=info msg="StopContainer for \"e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206\" returns successfully"
Jul 2 00:44:25.947553 env[1216]: time="2024-07-02T00:44:25.947523663Z" level=info msg="StopPodSandbox for \"ce7f0ecdacacc20da4f804916d4925a0a2630de482b29ea537db893e0daebd2a\""
Jul 2 00:44:25.947610 env[1216]: time="2024-07-02T00:44:25.947589140Z" level=info msg="Container to stop \"e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:44:25.949315 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce7f0ecdacacc20da4f804916d4925a0a2630de482b29ea537db893e0daebd2a-shm.mount: Deactivated successfully.
Jul 2 00:44:25.955323 systemd-networkd[1041]: lxc_health: Link DOWN
Jul 2 00:44:25.955331 systemd-networkd[1041]: lxc_health: Lost carrier
Jul 2 00:44:25.955952 systemd[1]: cri-containerd-ce7f0ecdacacc20da4f804916d4925a0a2630de482b29ea537db893e0daebd2a.scope: Deactivated successfully.
Jul 2 00:44:25.981224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce7f0ecdacacc20da4f804916d4925a0a2630de482b29ea537db893e0daebd2a-rootfs.mount: Deactivated successfully.
Jul 2 00:44:25.981861 systemd[1]: cri-containerd-8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32.scope: Deactivated successfully.
Jul 2 00:44:25.982179 systemd[1]: cri-containerd-8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32.scope: Consumed 6.734s CPU time.
Jul 2 00:44:25.993631 env[1216]: time="2024-07-02T00:44:25.993576920Z" level=info msg="shim disconnected" id=ce7f0ecdacacc20da4f804916d4925a0a2630de482b29ea537db893e0daebd2a
Jul 2 00:44:25.993631 env[1216]: time="2024-07-02T00:44:25.993622478Z" level=warning msg="cleaning up after shim disconnected" id=ce7f0ecdacacc20da4f804916d4925a0a2630de482b29ea537db893e0daebd2a namespace=k8s.io
Jul 2 00:44:25.993631 env[1216]: time="2024-07-02T00:44:25.993637037Z" level=info msg="cleaning up dead shim"
Jul 2 00:44:25.999560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32-rootfs.mount: Deactivated successfully.
Jul 2 00:44:26.002993 env[1216]: time="2024-07-02T00:44:26.002950465Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3744 runtime=io.containerd.runc.v2\n"
Jul 2 00:44:26.003388 env[1216]: time="2024-07-02T00:44:26.003333369Z" level=info msg="TearDown network for sandbox \"ce7f0ecdacacc20da4f804916d4925a0a2630de482b29ea537db893e0daebd2a\" successfully"
Jul 2 00:44:26.003443 env[1216]: time="2024-07-02T00:44:26.003385607Z" level=info msg="StopPodSandbox for \"ce7f0ecdacacc20da4f804916d4925a0a2630de482b29ea537db893e0daebd2a\" returns successfully"
Jul 2 00:44:26.004331 env[1216]: time="2024-07-02T00:44:26.004298489Z" level=info msg="shim disconnected" id=8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32
Jul 2 00:44:26.004402 env[1216]: time="2024-07-02T00:44:26.004333647Z" level=warning msg="cleaning up after shim disconnected" id=8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32 namespace=k8s.io
Jul 2 00:44:26.004402 env[1216]: time="2024-07-02T00:44:26.004342727Z" level=info msg="cleaning up dead shim"
Jul 2 00:44:26.017700 env[1216]: time="2024-07-02T00:44:26.017647889Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3757 runtime=io.containerd.runc.v2\n"
Jul 2 00:44:26.019451 env[1216]: time="2024-07-02T00:44:26.019408416Z" level=info msg="StopContainer for \"8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32\" returns successfully"
Jul 2 00:44:26.019820 env[1216]: time="2024-07-02T00:44:26.019798719Z" level=info msg="StopPodSandbox for \"685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf\""
Jul 2 00:44:26.019873 env[1216]: time="2024-07-02T00:44:26.019856597Z" level=info msg="Container to stop \"5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:44:26.019904 env[1216]: time="2024-07-02T00:44:26.019871756Z" level=info msg="Container to stop \"3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:44:26.019904 env[1216]: time="2024-07-02T00:44:26.019883436Z" level=info msg="Container to stop \"8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:44:26.019904 env[1216]: time="2024-07-02T00:44:26.019894795Z" level=info msg="Container to stop \"69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:44:26.020048 env[1216]: time="2024-07-02T00:44:26.019905155Z" level=info msg="Container to stop \"bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:44:26.024913 systemd[1]: cri-containerd-685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf.scope: Deactivated successfully.
Jul 2 00:44:26.046805 env[1216]: time="2024-07-02T00:44:26.046749630Z" level=info msg="shim disconnected" id=685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf
Jul 2 00:44:26.046805 env[1216]: time="2024-07-02T00:44:26.046804588Z" level=warning msg="cleaning up after shim disconnected" id=685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf namespace=k8s.io
Jul 2 00:44:26.047002 env[1216]: time="2024-07-02T00:44:26.046814788Z" level=info msg="cleaning up dead shim"
Jul 2 00:44:26.055699 env[1216]: time="2024-07-02T00:44:26.055646178Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3788 runtime=io.containerd.runc.v2\n"
Jul 2 00:44:26.056294 env[1216]: time="2024-07-02T00:44:26.056259072Z" level=info msg="TearDown network for sandbox \"685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf\" successfully"
Jul 2 00:44:26.056294 env[1216]: time="2024-07-02T00:44:26.056290831Z" level=info msg="StopPodSandbox for \"685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf\" returns successfully"
Jul 2 00:44:26.079037 kubelet[2025]: I0702 00:44:26.079005 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-cilium-cgroup\") pod \"e12c83a6-d6f9-417f-83b7-fef0196df593\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") "
Jul 2 00:44:26.079465 kubelet[2025]: I0702 00:44:26.079053 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/123230f8-b95c-43f9-ae22-20cd9dde7043-cilium-config-path\") pod \"123230f8-b95c-43f9-ae22-20cd9dde7043\" (UID: \"123230f8-b95c-43f9-ae22-20cd9dde7043\") "
Jul 2 00:44:26.079465 kubelet[2025]: I0702 00:44:26.079079 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktpjl\" (UniqueName: \"kubernetes.io/projected/e12c83a6-d6f9-417f-83b7-fef0196df593-kube-api-access-ktpjl\") pod \"e12c83a6-d6f9-417f-83b7-fef0196df593\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") "
Jul 2 00:44:26.079465 kubelet[2025]: I0702 00:44:26.079100 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-host-proc-sys-kernel\") pod \"e12c83a6-d6f9-417f-83b7-fef0196df593\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") "
Jul 2 00:44:26.079465 kubelet[2025]: I0702 00:44:26.079182 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdhbm\" (UniqueName: \"kubernetes.io/projected/123230f8-b95c-43f9-ae22-20cd9dde7043-kube-api-access-mdhbm\") pod \"123230f8-b95c-43f9-ae22-20cd9dde7043\" (UID: \"123230f8-b95c-43f9-ae22-20cd9dde7043\") "
Jul 2 00:44:26.079465 kubelet[2025]: I0702 00:44:26.079205 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-xtables-lock\") pod \"e12c83a6-d6f9-417f-83b7-fef0196df593\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") "
Jul 2 00:44:26.079465 kubelet[2025]: I0702 00:44:26.079222 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-host-proc-sys-net\") pod \"e12c83a6-d6f9-417f-83b7-fef0196df593\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") "
Jul 2 00:44:26.079620 kubelet[2025]: I0702 00:44:26.079240 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-cni-path\") pod \"e12c83a6-d6f9-417f-83b7-fef0196df593\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") "
Jul 2 00:44:26.079620 kubelet[2025]: I0702 00:44:26.079259 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e12c83a6-d6f9-417f-83b7-fef0196df593-hubble-tls\") pod \"e12c83a6-d6f9-417f-83b7-fef0196df593\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") "
Jul 2 00:44:26.079620 kubelet[2025]: I0702 00:44:26.079278 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-lib-modules\") pod \"e12c83a6-d6f9-417f-83b7-fef0196df593\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") "
Jul 2 00:44:26.079620 kubelet[2025]: I0702 00:44:26.079317 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-etc-cni-netd\") pod \"e12c83a6-d6f9-417f-83b7-fef0196df593\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") "
Jul 2 00:44:26.079620 kubelet[2025]: I0702 00:44:26.079335 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-hostproc\") pod \"e12c83a6-d6f9-417f-83b7-fef0196df593\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") "
Jul 2 00:44:26.079620 kubelet[2025]: I0702 00:44:26.079355 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-cilium-run\") pod \"e12c83a6-d6f9-417f-83b7-fef0196df593\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") "
Jul 2 00:44:26.079763 kubelet[2025]: I0702 00:44:26.079372 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-bpf-maps\") pod \"e12c83a6-d6f9-417f-83b7-fef0196df593\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") "
Jul 2 00:44:26.079763 kubelet[2025]: I0702 00:44:26.079392 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e12c83a6-d6f9-417f-83b7-fef0196df593-clustermesh-secrets\") pod \"e12c83a6-d6f9-417f-83b7-fef0196df593\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") "
Jul 2 00:44:26.079763 kubelet[2025]: I0702 00:44:26.079419 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e12c83a6-d6f9-417f-83b7-fef0196df593-cilium-config-path\") pod \"e12c83a6-d6f9-417f-83b7-fef0196df593\" (UID: \"e12c83a6-d6f9-417f-83b7-fef0196df593\") "
Jul 2 00:44:26.085768 kubelet[2025]: I0702 00:44:26.085720 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e12c83a6-d6f9-417f-83b7-fef0196df593" (UID: "e12c83a6-d6f9-417f-83b7-fef0196df593"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:26.085768 kubelet[2025]: I0702 00:44:26.085727 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e12c83a6-d6f9-417f-83b7-fef0196df593" (UID: "e12c83a6-d6f9-417f-83b7-fef0196df593"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:26.087423 kubelet[2025]: I0702 00:44:26.087369 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/123230f8-b95c-43f9-ae22-20cd9dde7043-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "123230f8-b95c-43f9-ae22-20cd9dde7043" (UID: "123230f8-b95c-43f9-ae22-20cd9dde7043"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:44:26.087514 kubelet[2025]: I0702 00:44:26.087465 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e12c83a6-d6f9-417f-83b7-fef0196df593" (UID: "e12c83a6-d6f9-417f-83b7-fef0196df593"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:26.087514 kubelet[2025]: I0702 00:44:26.087489 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e12c83a6-d6f9-417f-83b7-fef0196df593" (UID: "e12c83a6-d6f9-417f-83b7-fef0196df593"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:26.087514 kubelet[2025]: I0702 00:44:26.087506 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e12c83a6-d6f9-417f-83b7-fef0196df593" (UID: "e12c83a6-d6f9-417f-83b7-fef0196df593"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:26.087592 kubelet[2025]: I0702 00:44:26.087523 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-cni-path" (OuterVolumeSpecName: "cni-path") pod "e12c83a6-d6f9-417f-83b7-fef0196df593" (UID: "e12c83a6-d6f9-417f-83b7-fef0196df593"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:26.087818 kubelet[2025]: I0702 00:44:26.087786 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e12c83a6-d6f9-417f-83b7-fef0196df593-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e12c83a6-d6f9-417f-83b7-fef0196df593" (UID: "e12c83a6-d6f9-417f-83b7-fef0196df593"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:44:26.087869 kubelet[2025]: I0702 00:44:26.087838 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e12c83a6-d6f9-417f-83b7-fef0196df593" (UID: "e12c83a6-d6f9-417f-83b7-fef0196df593"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:26.087869 kubelet[2025]: I0702 00:44:26.087859 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e12c83a6-d6f9-417f-83b7-fef0196df593" (UID: "e12c83a6-d6f9-417f-83b7-fef0196df593"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:26.087923 kubelet[2025]: I0702 00:44:26.087876 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-hostproc" (OuterVolumeSpecName: "hostproc") pod "e12c83a6-d6f9-417f-83b7-fef0196df593" (UID: "e12c83a6-d6f9-417f-83b7-fef0196df593"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:26.087923 kubelet[2025]: I0702 00:44:26.087892 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e12c83a6-d6f9-417f-83b7-fef0196df593" (UID: "e12c83a6-d6f9-417f-83b7-fef0196df593"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:26.089297 kubelet[2025]: I0702 00:44:26.089269 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123230f8-b95c-43f9-ae22-20cd9dde7043-kube-api-access-mdhbm" (OuterVolumeSpecName: "kube-api-access-mdhbm") pod "123230f8-b95c-43f9-ae22-20cd9dde7043" (UID: "123230f8-b95c-43f9-ae22-20cd9dde7043"). InnerVolumeSpecName "kube-api-access-mdhbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:44:26.090975 kubelet[2025]: I0702 00:44:26.090936 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e12c83a6-d6f9-417f-83b7-fef0196df593-kube-api-access-ktpjl" (OuterVolumeSpecName: "kube-api-access-ktpjl") pod "e12c83a6-d6f9-417f-83b7-fef0196df593" (UID: "e12c83a6-d6f9-417f-83b7-fef0196df593"). InnerVolumeSpecName "kube-api-access-ktpjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:44:26.092590 kubelet[2025]: I0702 00:44:26.092520 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e12c83a6-d6f9-417f-83b7-fef0196df593-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e12c83a6-d6f9-417f-83b7-fef0196df593" (UID: "e12c83a6-d6f9-417f-83b7-fef0196df593"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:44:26.092937 kubelet[2025]: I0702 00:44:26.092850 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12c83a6-d6f9-417f-83b7-fef0196df593-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e12c83a6-d6f9-417f-83b7-fef0196df593" (UID: "e12c83a6-d6f9-417f-83b7-fef0196df593"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 00:44:26.180330 kubelet[2025]: I0702 00:44:26.180217 2025 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ktpjl\" (UniqueName: \"kubernetes.io/projected/e12c83a6-d6f9-417f-83b7-fef0196df593-kube-api-access-ktpjl\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.180330 kubelet[2025]: I0702 00:44:26.180254 2025 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.180330 kubelet[2025]: I0702 00:44:26.180266 2025 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mdhbm\" (UniqueName: \"kubernetes.io/projected/123230f8-b95c-43f9-ae22-20cd9dde7043-kube-api-access-mdhbm\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.180330 kubelet[2025]: I0702 00:44:26.180279 2025 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.180330 kubelet[2025]: I0702 00:44:26.180288 2025 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.180330 kubelet[2025]: I0702 00:44:26.180299 2025 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.180330 kubelet[2025]: I0702 00:44:26.180309 2025 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e12c83a6-d6f9-417f-83b7-fef0196df593-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.180330 kubelet[2025]: I0702 00:44:26.180318 2025 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.180630 kubelet[2025]: I0702 00:44:26.180327 2025 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.180630 kubelet[2025]: I0702 00:44:26.180337 2025 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.180630 kubelet[2025]: I0702 00:44:26.180346 2025 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.180630 kubelet[2025]: I0702 00:44:26.180356 2025 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e12c83a6-d6f9-417f-83b7-fef0196df593-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.180630 kubelet[2025]: I0702 00:44:26.180364 2025 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.180630 kubelet[2025]: I0702 00:44:26.180377 2025 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e12c83a6-d6f9-417f-83b7-fef0196df593-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.180630 kubelet[2025]: I0702 00:44:26.180386 2025 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e12c83a6-d6f9-417f-83b7-fef0196df593-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.180630 kubelet[2025]: I0702 00:44:26.180396 2025 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/123230f8-b95c-43f9-ae22-20cd9dde7043-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:26.262044 kubelet[2025]: I0702 00:44:26.262010 2025 scope.go:117] "RemoveContainer" containerID="e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206"
Jul 2 00:44:26.263250 env[1216]: time="2024-07-02T00:44:26.263208083Z" level=info msg="RemoveContainer for \"e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206\""
Jul 2 00:44:26.271555 systemd[1]: Removed slice kubepods-burstable-pode12c83a6_d6f9_417f_83b7_fef0196df593.slice.
Jul 2 00:44:26.271658 systemd[1]: kubepods-burstable-pode12c83a6_d6f9_417f_83b7_fef0196df593.slice: Consumed 6.964s CPU time.
Jul 2 00:44:26.272565 systemd[1]: Removed slice kubepods-besteffort-pod123230f8_b95c_43f9_ae22_20cd9dde7043.slice.
Jul 2 00:44:26.274786 env[1216]: time="2024-07-02T00:44:26.274741920Z" level=info msg="RemoveContainer for \"e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206\" returns successfully"
Jul 2 00:44:26.275186 kubelet[2025]: I0702 00:44:26.275097 2025 scope.go:117] "RemoveContainer" containerID="e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206"
Jul 2 00:44:26.276109 env[1216]: time="2024-07-02T00:44:26.275359734Z" level=error msg="ContainerStatus for \"e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206\": not found"
Jul 2 00:44:26.276237 kubelet[2025]: E0702 00:44:26.276212 2025 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206\": not found" containerID="e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206"
Jul 2 00:44:26.277613 kubelet[2025]: I0702 00:44:26.276802 2025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206"} err="failed to get container status \"e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3e73274ea6437a1181a23c5f0588a0a72d5373df560964f5dc20494cb357206\": not found"
Jul 2 00:44:26.277613 kubelet[2025]: I0702 00:44:26.276838 2025 scope.go:117] "RemoveContainer" containerID="8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32"
Jul 2 00:44:26.278816 env[1216]: time="2024-07-02T00:44:26.277882948Z" level=info msg="RemoveContainer for \"8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32\""
Jul 2 00:44:26.282653 env[1216]: time="2024-07-02T00:44:26.281772785Z" level=info msg="RemoveContainer for \"8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32\" returns successfully"
Jul 2 00:44:26.282747 kubelet[2025]: I0702 00:44:26.282076 2025 scope.go:117] "RemoveContainer" containerID="bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670"
Jul 2 00:44:26.285412 env[1216]: time="2024-07-02T00:44:26.285316197Z" level=info msg="RemoveContainer for \"bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670\""
Jul 2 00:44:26.292482 env[1216]: time="2024-07-02T00:44:26.292438219Z" level=info msg="RemoveContainer for \"bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670\" returns successfully"
Jul 2 00:44:26.292699 kubelet[2025]: I0702 00:44:26.292660 2025 scope.go:117] "RemoveContainer" containerID="3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a"
Jul 2 00:44:26.293897 env[1216]: time="2024-07-02T00:44:26.293857359Z" level=info msg="RemoveContainer for \"3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a\""
Jul 2 00:44:26.296343 env[1216]: time="2024-07-02T00:44:26.296291057Z" level=info msg="RemoveContainer for \"3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a\" returns successfully"
Jul 2 00:44:26.296595 kubelet[2025]: I0702 00:44:26.296557 2025 scope.go:117] "RemoveContainer" containerID="69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c"
Jul 2 00:44:26.297540 env[1216]: time="2024-07-02T00:44:26.297510806Z" level=info msg="RemoveContainer for \"69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c\""
Jul 2 00:44:26.300058 env[1216]: time="2024-07-02T00:44:26.300020741Z" level=info msg="RemoveContainer for \"69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c\" returns successfully"
Jul 2 00:44:26.300233 kubelet[2025]: I0702 00:44:26.300200 2025 scope.go:117] "RemoveContainer" containerID="5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579"
Jul 2 00:44:26.301242 env[1216]: time="2024-07-02T00:44:26.301212691Z" level=info msg="RemoveContainer for \"5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579\""
Jul 2 00:44:26.303580 env[1216]: time="2024-07-02T00:44:26.303549033Z" level=info msg="RemoveContainer for \"5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579\" returns successfully"
Jul 2 00:44:26.303743 kubelet[2025]: I0702 00:44:26.303723 2025 scope.go:117] "RemoveContainer" containerID="8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32"
Jul 2 00:44:26.303977 env[1216]: time="2024-07-02T00:44:26.303919618Z" level=error msg="ContainerStatus for \"8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32\": not found"
Jul 2 00:44:26.304088 kubelet[2025]: E0702 00:44:26.304075 2025 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32\": not found" containerID="8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32"
Jul 2 00:44:26.304121 kubelet[2025]: I0702 00:44:26.304109 2025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32"} err="failed to get container status \"8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a0b8c59bcedb2b734debe3b0b8c4f0971a0b9ac08cd1c20f15774e22f756c32\": not found"
Jul 2 00:44:26.304121 kubelet[2025]: I0702 00:44:26.304119 2025 scope.go:117] "RemoveContainer" containerID="bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670"
Jul 2 00:44:26.304326 env[1216]: time="2024-07-02T00:44:26.304278723Z" level=error msg="ContainerStatus for \"bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670\": not found"
Jul 2 00:44:26.304445 kubelet[2025]: E0702 00:44:26.304429 2025 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670\": not found" containerID="bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670"
Jul 2 00:44:26.304473 kubelet[2025]: I0702 00:44:26.304459 2025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670"} err="failed to get container status \"bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc26493efca53a14b66709ddae751c28c3217ac3f6145e9d13e5c1c319f93670\": not found"
Jul 2 00:44:26.304473 kubelet[2025]: I0702 00:44:26.304468 2025 scope.go:117] "RemoveContainer" containerID="3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a"
Jul 2 00:44:26.304667 env[1216]: time="2024-07-02T00:44:26.304617628Z" level=error msg="ContainerStatus for \"3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a\": not found"
Jul 2 00:44:26.304773 kubelet[2025]: E0702 00:44:26.304761 2025 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a\": not found" containerID="3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a"
Jul 2 00:44:26.304797 kubelet[2025]: I0702 00:44:26.304785 2025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a"} err="failed to get container status \"3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3029a4b2fb8588a74833047b627d529748b1584c326a6cabf2ed2f8c7eafc51a\": not found"
Jul 2 00:44:26.304797 kubelet[2025]: I0702 00:44:26.304794 2025 scope.go:117] "RemoveContainer" containerID="69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c"
Jul 2 00:44:26.304982 env[1216]: time="2024-07-02T00:44:26.304935095Z" level=error msg="ContainerStatus for \"69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c\": not found"
Jul 2 00:44:26.305089 kubelet[2025]: E0702 00:44:26.305073 2025 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c\": not found" containerID="69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c"
Jul 2 00:44:26.305115 kubelet[2025]: I0702 00:44:26.305102 2025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c"} err="failed to get container status \"69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c\": rpc error: code = NotFound desc = an error occurred when try to find container \"69b6828fdcd6d86085afef37d72cd609cb8fec69e0c6e84f13a2df82fb7a793c\": not found"
Jul 2 00:44:26.305115 kubelet[2025]: I0702 00:44:26.305115 2025 scope.go:117] "RemoveContainer" containerID="5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579"
Jul 2 00:44:26.305321 env[1216]: time="2024-07-02T00:44:26.305274361Z" level=error msg="ContainerStatus for \"5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579\": not found"
Jul 2 00:44:26.305448 kubelet[2025]: E0702 00:44:26.305431 2025 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579\": not found" containerID="5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579"
Jul 2 00:44:26.305481 kubelet[2025]: I0702 00:44:26.305461 2025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579"} err="failed to get container status \"5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579\": rpc error: code = NotFound desc = an error occurred when try to find container \"5513961801034fc81b7dc93527e6933531eee3f12b4a11b677dbf1479f887579\": not found"
Jul 2 00:44:26.901110 systemd[1]: var-lib-kubelet-pods-123230f8\x2db95c\x2d43f9\x2dae22\x2d20cd9dde7043-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmdhbm.mount: Deactivated successfully.
Jul 2 00:44:26.901236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf-rootfs.mount: Deactivated successfully.
Jul 2 00:44:26.901295 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-685741a83c9c3cc5dad97643e32df1a303902e6130cf22dfcd99adca7a68b1cf-shm.mount: Deactivated successfully.
Jul 2 00:44:26.901343 systemd[1]: var-lib-kubelet-pods-e12c83a6\x2dd6f9\x2d417f\x2d83b7\x2dfef0196df593-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dktpjl.mount: Deactivated successfully.
Jul 2 00:44:26.901407 systemd[1]: var-lib-kubelet-pods-e12c83a6\x2dd6f9\x2d417f\x2d83b7\x2dfef0196df593-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 2 00:44:26.901477 systemd[1]: var-lib-kubelet-pods-e12c83a6\x2dd6f9\x2d417f\x2d83b7\x2dfef0196df593-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 2 00:44:27.089301 kubelet[2025]: I0702 00:44:27.089270 2025 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="123230f8-b95c-43f9-ae22-20cd9dde7043" path="/var/lib/kubelet/pods/123230f8-b95c-43f9-ae22-20cd9dde7043/volumes"
Jul 2 00:44:27.089718 kubelet[2025]: I0702 00:44:27.089685 2025 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e12c83a6-d6f9-417f-83b7-fef0196df593" path="/var/lib/kubelet/pods/e12c83a6-d6f9-417f-83b7-fef0196df593/volumes"
Jul 2 00:44:27.149017 kubelet[2025]: E0702 00:44:27.148988 2025 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 00:44:27.864358 sshd[3646]: pam_unix(sshd:session): session closed for user core
Jul 2 00:44:27.867891 systemd[1]: Started sshd@22-10.0.0.35:22-10.0.0.1:47650.service.
Jul 2 00:44:27.869006 systemd[1]: sshd@21-10.0.0.35:22-10.0.0.1:47644.service: Deactivated successfully.
Jul 2 00:44:27.869725 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 00:44:27.869881 systemd[1]: session-22.scope: Consumed 1.547s CPU time.
Jul 2 00:44:27.878307 systemd-logind[1205]: Session 22 logged out. Waiting for processes to exit.
Jul 2 00:44:27.879506 systemd-logind[1205]: Removed session 22.
Jul 2 00:44:27.910865 sshd[3806]: Accepted publickey for core from 10.0.0.1 port 47650 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:44:27.911960 sshd[3806]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:44:27.916031 systemd-logind[1205]: New session 23 of user core.
Jul 2 00:44:27.916480 systemd[1]: Started session-23.scope.
Jul 2 00:44:28.919513 sshd[3806]: pam_unix(sshd:session): session closed for user core
Jul 2 00:44:28.922468 systemd[1]: sshd@22-10.0.0.35:22-10.0.0.1:47650.service: Deactivated successfully.
Jul 2 00:44:28.923149 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 00:44:28.923867 systemd-logind[1205]: Session 23 logged out. Waiting for processes to exit.
Jul 2 00:44:28.925043 systemd[1]: Started sshd@23-10.0.0.35:22-10.0.0.1:47654.service.
Jul 2 00:44:28.929182 systemd-logind[1205]: Removed session 23.
Jul 2 00:44:28.943952 kubelet[2025]: I0702 00:44:28.943920 2025 topology_manager.go:215] "Topology Admit Handler" podUID="7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" podNamespace="kube-system" podName="cilium-lg2wj"
Jul 2 00:44:28.944360 kubelet[2025]: E0702 00:44:28.944334 2025 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e12c83a6-d6f9-417f-83b7-fef0196df593" containerName="clean-cilium-state"
Jul 2 00:44:28.944434 kubelet[2025]: E0702 00:44:28.944424 2025 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e12c83a6-d6f9-417f-83b7-fef0196df593" containerName="mount-cgroup"
Jul 2 00:44:28.944501 kubelet[2025]: E0702 00:44:28.944493 2025 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="123230f8-b95c-43f9-ae22-20cd9dde7043" containerName="cilium-operator"
Jul 2 00:44:28.944584 kubelet[2025]: E0702 00:44:28.944575 2025 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e12c83a6-d6f9-417f-83b7-fef0196df593" containerName="apply-sysctl-overwrites"
Jul 2 00:44:28.944671 kubelet[2025]: E0702 00:44:28.944662 2025 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e12c83a6-d6f9-417f-83b7-fef0196df593" containerName="mount-bpf-fs"
Jul 2 00:44:28.944743 kubelet[2025]: E0702 00:44:28.944735 2025 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e12c83a6-d6f9-417f-83b7-fef0196df593" containerName="cilium-agent"
Jul 2 00:44:28.944835 kubelet[2025]: I0702 00:44:28.944826 2025 memory_manager.go:354] "RemoveStaleState removing state" podUID="123230f8-b95c-43f9-ae22-20cd9dde7043" containerName="cilium-operator"
Jul 2 00:44:28.944907 kubelet[2025]: I0702 00:44:28.944898 2025 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12c83a6-d6f9-417f-83b7-fef0196df593" containerName="cilium-agent"
Jul 2 00:44:28.950877 kubelet[2025]: W0702 00:44:28.950843 2025 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Jul 2 00:44:28.950877 kubelet[2025]: E0702 00:44:28.950882 2025 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Jul 2 00:44:28.951799 systemd[1]: Created slice kubepods-burstable-pod7cef45c6_b4a4_42c8_89dc_0a7434c6bdad.slice.
Jul 2 00:44:28.971787 sshd[3820]: Accepted publickey for core from 10.0.0.1 port 47654 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:44:28.973681 sshd[3820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:44:28.977344 systemd-logind[1205]: New session 24 of user core.
Jul 2 00:44:28.978217 systemd[1]: Started session-24.scope.
Jul 2 00:44:28.992613 kubelet[2025]: I0702 00:44:28.992576 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-host-proc-sys-net\") pod \"cilium-lg2wj\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") " pod="kube-system/cilium-lg2wj"
Jul 2 00:44:28.992613 kubelet[2025]: I0702 00:44:28.992622 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-host-proc-sys-kernel\") pod \"cilium-lg2wj\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") " pod="kube-system/cilium-lg2wj"
Jul 2 00:44:28.992744 kubelet[2025]: I0702 00:44:28.992644 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-hostproc\") pod \"cilium-lg2wj\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") " pod="kube-system/cilium-lg2wj"
Jul 2 00:44:28.992744 kubelet[2025]: I0702 00:44:28.992664 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-xtables-lock\") pod \"cilium-lg2wj\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") " pod="kube-system/cilium-lg2wj"
Jul 2 00:44:28.992744 kubelet[2025]: I0702 00:44:28.992684 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-config-path\") pod \"cilium-lg2wj\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") " pod="kube-system/cilium-lg2wj"
Jul 2 00:44:28.992744 kubelet[2025]: I0702 00:44:28.992703 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cni-path\") pod \"cilium-lg2wj\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") " pod="kube-system/cilium-lg2wj"
Jul 2 00:44:28.992744 kubelet[2025]: I0702 00:44:28.992722 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-ipsec-secrets\") pod \"cilium-lg2wj\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") " pod="kube-system/cilium-lg2wj"
Jul 2 00:44:28.992744 kubelet[2025]: I0702 00:44:28.992742 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-cgroup\") pod \"cilium-lg2wj\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") " pod="kube-system/cilium-lg2wj"
Jul 2 00:44:28.992900 kubelet[2025]: I0702 00:44:28.992761 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-hubble-tls\") pod \"cilium-lg2wj\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") " pod="kube-system/cilium-lg2wj"
Jul 2 00:44:28.992900 kubelet[2025]: I0702 00:44:28.992779 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-run\") pod \"cilium-lg2wj\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") " pod="kube-system/cilium-lg2wj"
Jul 2 00:44:28.992900 kubelet[2025]: I0702 00:44:28.992798 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-lib-modules\") pod \"cilium-lg2wj\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") " pod="kube-system/cilium-lg2wj"
Jul 2 00:44:28.992900 kubelet[2025]: I0702 00:44:28.992818 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44dcf\" (UniqueName: \"kubernetes.io/projected/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-kube-api-access-44dcf\") pod \"cilium-lg2wj\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") " pod="kube-system/cilium-lg2wj"
Jul 2 00:44:28.992900 kubelet[2025]: I0702 00:44:28.992837 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-bpf-maps\") pod \"cilium-lg2wj\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") " pod="kube-system/cilium-lg2wj"
Jul 2 00:44:28.992900 kubelet[2025]: I0702 00:44:28.992858 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-etc-cni-netd\") pod \"cilium-lg2wj\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") " pod="kube-system/cilium-lg2wj"
Jul 2 00:44:28.993033 kubelet[2025]: I0702 00:44:28.992877 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-clustermesh-secrets\") pod \"cilium-lg2wj\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") " pod="kube-system/cilium-lg2wj"
Jul 2 00:44:29.010330 kubelet[2025]: I0702 00:44:29.010309 2025 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T00:44:29Z","lastTransitionTime":"2024-07-02T00:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 00:44:29.117193 kubelet[2025]: E0702 00:44:29.115454 2025 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[cilium-ipsec-secrets], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-lg2wj" podUID="7cef45c6-b4a4-42c8-89dc-0a7434c6bdad"
Jul 2 00:44:29.119828 sshd[3820]: pam_unix(sshd:session): session closed for user core
Jul 2 00:44:29.122829 systemd[1]: sshd@23-10.0.0.35:22-10.0.0.1:47654.service: Deactivated successfully.
Jul 2 00:44:29.124554 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 00:44:29.125259 systemd-logind[1205]: Session 24 logged out. Waiting for processes to exit.
Jul 2 00:44:29.127627 systemd[1]: Started sshd@24-10.0.0.35:22-10.0.0.1:47670.service.
Jul 2 00:44:29.129480 systemd-logind[1205]: Removed session 24.
Jul 2 00:44:29.169098 sshd[3836]: Accepted publickey for core from 10.0.0.1 port 47670 ssh2: RSA SHA256:p8Y1IuTS6TxJ481HRtC9DuXWW9Af2DGdhQUd1gde29c
Jul 2 00:44:29.170742 sshd[3836]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:44:29.175283 systemd-logind[1205]: New session 25 of user core.
Jul 2 00:44:29.176212 systemd[1]: Started session-25.scope.
Jul 2 00:44:29.310214 kubelet[2025]: I0702 00:44:29.310178 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-clustermesh-secrets\") pod \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") "
Jul 2 00:44:29.310410 kubelet[2025]: I0702 00:44:29.310395 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-xtables-lock\") pod \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") "
Jul 2 00:44:29.310487 kubelet[2025]: I0702 00:44:29.310476 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-lib-modules\") pod \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") "
Jul 2 00:44:29.310565 kubelet[2025]: I0702 00:44:29.310553 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-host-proc-sys-net\") pod \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") "
Jul 2 00:44:29.310644 kubelet[2025]: I0702 00:44:29.310612 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" (UID: "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:29.310712 kubelet[2025]: I0702 00:44:29.310625 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-cgroup\") pod \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") "
Jul 2 00:44:29.310712 kubelet[2025]: I0702 00:44:29.310685 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-run\") pod \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") "
Jul 2 00:44:29.310712 kubelet[2025]: I0702 00:44:29.310710 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-host-proc-sys-kernel\") pod \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") "
Jul 2 00:44:29.310783 kubelet[2025]: I0702 00:44:29.310732 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cni-path\") pod \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") "
Jul 2 00:44:29.310783 kubelet[2025]: I0702 00:44:29.310763 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-config-path\") pod \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") "
Jul 2 00:44:29.310783 kubelet[2025]: I0702 00:44:29.310782 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-etc-cni-netd\") pod \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") "
Jul 2 00:44:29.310850 kubelet[2025]: I0702 00:44:29.310800 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-hostproc\") pod \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") "
Jul 2 00:44:29.310850 kubelet[2025]: I0702 00:44:29.310829 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-hubble-tls\") pod \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") "
Jul 2 00:44:29.310891 kubelet[2025]: I0702 00:44:29.310853 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44dcf\" (UniqueName: \"kubernetes.io/projected/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-kube-api-access-44dcf\") pod \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") "
Jul 2 00:44:29.310891 kubelet[2025]: I0702 00:44:29.310870 2025 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-bpf-maps\") pod \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\" (UID: \"7cef45c6-b4a4-42c8-89dc-0a7434c6bdad\") "
Jul 2 00:44:29.310938 kubelet[2025]: I0702 00:44:29.310930 2025 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:29.310961 kubelet[2025]: I0702 00:44:29.310952 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" (UID: "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:29.310984 kubelet[2025]: I0702 00:44:29.310508 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" (UID: "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:29.311007 kubelet[2025]: I0702 00:44:29.310984 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" (UID: "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:29.311029 kubelet[2025]: I0702 00:44:29.311002 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" (UID: "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:29.311029 kubelet[2025]: I0702 00:44:29.311021 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cni-path" (OuterVolumeSpecName: "cni-path") pod "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" (UID: "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:29.311120 kubelet[2025]: I0702 00:44:29.311102 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" (UID: "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:29.311210 kubelet[2025]: I0702 00:44:29.311196 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-hostproc" (OuterVolumeSpecName: "hostproc") pod "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" (UID: "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:29.311305 kubelet[2025]: I0702 00:44:29.311279 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" (UID: "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:29.311630 kubelet[2025]: I0702 00:44:29.310483 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" (UID: "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:44:29.312958 kubelet[2025]: I0702 00:44:29.312909 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" (UID: "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:44:29.315190 systemd[1]: var-lib-kubelet-pods-7cef45c6\x2db4a4\x2d42c8\x2d89dc\x2d0a7434c6bdad-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 2 00:44:29.315329 systemd[1]: var-lib-kubelet-pods-7cef45c6\x2db4a4\x2d42c8\x2d89dc\x2d0a7434c6bdad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d44dcf.mount: Deactivated successfully.
Jul 2 00:44:29.315399 systemd[1]: var-lib-kubelet-pods-7cef45c6\x2db4a4\x2d42c8\x2d89dc\x2d0a7434c6bdad-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 2 00:44:29.317861 kubelet[2025]: I0702 00:44:29.317832 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" (UID: "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:44:29.318180 kubelet[2025]: I0702 00:44:29.318082 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" (UID: "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 00:44:29.318499 kubelet[2025]: I0702 00:44:29.318433 2025 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-kube-api-access-44dcf" (OuterVolumeSpecName: "kube-api-access-44dcf") pod "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" (UID: "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad"). InnerVolumeSpecName "kube-api-access-44dcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:44:29.411714 kubelet[2025]: I0702 00:44:29.411677 2025 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:29.411884 kubelet[2025]: I0702 00:44:29.411874 2025 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:29.411979 kubelet[2025]: I0702 00:44:29.411965 2025 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:29.412047 kubelet[2025]: I0702 00:44:29.412038 2025 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:29.412106 kubelet[2025]: I0702 00:44:29.412096 2025 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:29.412217 kubelet[2025]: I0702 00:44:29.412205 2025 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:29.412352 kubelet[2025]: I0702 00:44:29.412338 2025 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:29.412412 kubelet[2025]: I0702 00:44:29.412403 2025 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:29.412469 kubelet[2025]: I0702 00:44:29.412460 2025 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-44dcf\" (UniqueName: \"kubernetes.io/projected/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-kube-api-access-44dcf\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:29.412527 kubelet[2025]: I0702 00:44:29.412518 2025 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:29.412593 kubelet[2025]: I0702 00:44:29.412583 2025 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:29.412651 kubelet[2025]: I0702 00:44:29.412642 2025 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:29.412713 kubelet[2025]: I0702 00:44:29.412704 2025 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:30.086940 kubelet[2025]: E0702 00:44:30.086892 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:44:30.108519 kubelet[2025]: E0702 00:44:30.108466 2025 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Jul 2 00:44:30.109660 kubelet[2025]: E0702 00:44:30.109625 2025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-ipsec-secrets podName:7cef45c6-b4a4-42c8-89dc-0a7434c6bdad nodeName:}" failed. No retries permitted until 2024-07-02 00:44:30.608552583 +0000 UTC m=+83.630921071 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-ipsec-secrets") pod "cilium-lg2wj" (UID: "7cef45c6-b4a4-42c8-89dc-0a7434c6bdad") : failed to sync secret cache: timed out waiting for the condition
Jul 2 00:44:30.278826 systemd[1]: Removed slice kubepods-burstable-pod7cef45c6_b4a4_42c8_89dc_0a7434c6bdad.slice.
Jul 2 00:44:30.310218 kubelet[2025]: I0702 00:44:30.310155 2025 topology_manager.go:215] "Topology Admit Handler" podUID="0995b039-0eed-4be8-bd96-d047e9edf355" podNamespace="kube-system" podName="cilium-62rpt"
Jul 2 00:44:30.315974 systemd[1]: Created slice kubepods-burstable-pod0995b039_0eed_4be8_bd96_d047e9edf355.slice.
Jul 2 00:44:30.420647 kubelet[2025]: I0702 00:44:30.420545 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0995b039-0eed-4be8-bd96-d047e9edf355-hubble-tls\") pod \"cilium-62rpt\" (UID: \"0995b039-0eed-4be8-bd96-d047e9edf355\") " pod="kube-system/cilium-62rpt"
Jul 2 00:44:30.420834 kubelet[2025]: I0702 00:44:30.420819 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0995b039-0eed-4be8-bd96-d047e9edf355-bpf-maps\") pod \"cilium-62rpt\" (UID: \"0995b039-0eed-4be8-bd96-d047e9edf355\") " pod="kube-system/cilium-62rpt"
Jul 2 00:44:30.420921 kubelet[2025]: I0702 00:44:30.420910 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0995b039-0eed-4be8-bd96-d047e9edf355-etc-cni-netd\") pod \"cilium-62rpt\" (UID: \"0995b039-0eed-4be8-bd96-d047e9edf355\") " pod="kube-system/cilium-62rpt"
Jul 2 00:44:30.421006 kubelet[2025]: I0702 00:44:30.420995 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0995b039-0eed-4be8-bd96-d047e9edf355-host-proc-sys-net\") pod \"cilium-62rpt\" (UID: \"0995b039-0eed-4be8-bd96-d047e9edf355\") " pod="kube-system/cilium-62rpt"
Jul 2 00:44:30.421088 kubelet[2025]: I0702 00:44:30.421076 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0995b039-0eed-4be8-bd96-d047e9edf355-cilium-ipsec-secrets\") pod \"cilium-62rpt\" (UID: \"0995b039-0eed-4be8-bd96-d047e9edf355\") " pod="kube-system/cilium-62rpt"
Jul 2 00:44:30.421192 kubelet[2025]: I0702 00:44:30.421179 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0995b039-0eed-4be8-bd96-d047e9edf355-host-proc-sys-kernel\") pod \"cilium-62rpt\" (UID: \"0995b039-0eed-4be8-bd96-d047e9edf355\") " pod="kube-system/cilium-62rpt"
Jul 2 00:44:30.421301 kubelet[2025]: I0702 00:44:30.421287 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp5z4\" (UniqueName: \"kubernetes.io/projected/0995b039-0eed-4be8-bd96-d047e9edf355-kube-api-access-rp5z4\") pod \"cilium-62rpt\" (UID: \"0995b039-0eed-4be8-bd96-d047e9edf355\") " pod="kube-system/cilium-62rpt"
Jul 2 00:44:30.421442 kubelet[2025]: I0702 00:44:30.421415 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0995b039-0eed-4be8-bd96-d047e9edf355-cni-path\") pod \"cilium-62rpt\" (UID: \"0995b039-0eed-4be8-bd96-d047e9edf355\") " pod="kube-system/cilium-62rpt"
Jul 2 00:44:30.421499 kubelet[2025]: I0702 00:44:30.421454 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0995b039-0eed-4be8-bd96-d047e9edf355-cilium-run\") pod \"cilium-62rpt\" (UID: \"0995b039-0eed-4be8-bd96-d047e9edf355\") " pod="kube-system/cilium-62rpt"
Jul 2 00:44:30.421499 kubelet[2025]: I0702 00:44:30.421475 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0995b039-0eed-4be8-bd96-d047e9edf355-xtables-lock\") pod \"cilium-62rpt\" (UID: \"0995b039-0eed-4be8-bd96-d047e9edf355\") " pod="kube-system/cilium-62rpt"
Jul 2 00:44:30.421499 kubelet[2025]: I0702 00:44:30.421494 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0995b039-0eed-4be8-bd96-d047e9edf355-clustermesh-secrets\") pod \"cilium-62rpt\" (UID: \"0995b039-0eed-4be8-bd96-d047e9edf355\") " pod="kube-system/cilium-62rpt"
Jul 2 00:44:30.421577 kubelet[2025]: I0702 00:44:30.421513 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0995b039-0eed-4be8-bd96-d047e9edf355-cilium-config-path\") pod \"cilium-62rpt\" (UID: \"0995b039-0eed-4be8-bd96-d047e9edf355\") " pod="kube-system/cilium-62rpt"
Jul 2 00:44:30.421577 kubelet[2025]: I0702 00:44:30.421532 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0995b039-0eed-4be8-bd96-d047e9edf355-hostproc\") pod \"cilium-62rpt\" (UID: \"0995b039-0eed-4be8-bd96-d047e9edf355\") " pod="kube-system/cilium-62rpt"
Jul 2 00:44:30.421577 kubelet[2025]: I0702 00:44:30.421550 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0995b039-0eed-4be8-bd96-d047e9edf355-cilium-cgroup\") pod \"cilium-62rpt\" (UID: \"0995b039-0eed-4be8-bd96-d047e9edf355\") " pod="kube-system/cilium-62rpt"
Jul 2 00:44:30.421577 kubelet[2025]: I0702 00:44:30.421575 2025 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0995b039-0eed-4be8-bd96-d047e9edf355-lib-modules\") pod \"cilium-62rpt\" (UID: \"0995b039-0eed-4be8-bd96-d047e9edf355\") " pod="kube-system/cilium-62rpt"
Jul 2 00:44:30.421664 kubelet[2025]: I0702 00:44:30.421598 2025 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Jul 2 00:44:30.618033 kubelet[2025]: E0702 00:44:30.617955 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:44:30.618584 env[1216]: time="2024-07-02T00:44:30.618508181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-62rpt,Uid:0995b039-0eed-4be8-bd96-d047e9edf355,Namespace:kube-system,Attempt:0,}"
Jul 2 00:44:30.633419 env[1216]: time="2024-07-02T00:44:30.633341959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:44:30.633419 env[1216]: time="2024-07-02T00:44:30.633379838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:44:30.633419 env[1216]: time="2024-07-02T00:44:30.633390637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:44:30.633664 env[1216]: time="2024-07-02T00:44:30.633568912Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/779027b808bafc60fe85e977f828f75c0a0df9ad502bcd1489dbd25220f503a4 pid=3861 runtime=io.containerd.runc.v2
Jul 2 00:44:30.643394 systemd[1]: Started cri-containerd-779027b808bafc60fe85e977f828f75c0a0df9ad502bcd1489dbd25220f503a4.scope.
Jul 2 00:44:30.675908 env[1216]: time="2024-07-02T00:44:30.674716710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-62rpt,Uid:0995b039-0eed-4be8-bd96-d047e9edf355,Namespace:kube-system,Attempt:0,} returns sandbox id \"779027b808bafc60fe85e977f828f75c0a0df9ad502bcd1489dbd25220f503a4\""
Jul 2 00:44:30.676644 kubelet[2025]: E0702 00:44:30.676592 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:44:30.679015 env[1216]: time="2024-07-02T00:44:30.678985817Z" level=info msg="CreateContainer within sandbox \"779027b808bafc60fe85e977f828f75c0a0df9ad502bcd1489dbd25220f503a4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 00:44:30.689018 env[1216]: time="2024-07-02T00:44:30.688976066Z" level=info msg="CreateContainer within sandbox \"779027b808bafc60fe85e977f828f75c0a0df9ad502bcd1489dbd25220f503a4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c42e1c94d2f6cf3f7476ca0ec5e67a35fa972d8ce2133d65be19ee1a3ca37c4b\""
Jul 2 00:44:30.689796 env[1216]: time="2024-07-02T00:44:30.689770361Z" level=info msg="StartContainer for \"c42e1c94d2f6cf3f7476ca0ec5e67a35fa972d8ce2133d65be19ee1a3ca37c4b\""
Jul 2 00:44:30.703843 systemd[1]: Started cri-containerd-c42e1c94d2f6cf3f7476ca0ec5e67a35fa972d8ce2133d65be19ee1a3ca37c4b.scope.
Jul 2 00:44:30.743293 env[1216]: time="2024-07-02T00:44:30.741600387Z" level=info msg="StartContainer for \"c42e1c94d2f6cf3f7476ca0ec5e67a35fa972d8ce2133d65be19ee1a3ca37c4b\" returns successfully"
Jul 2 00:44:30.750490 systemd[1]: cri-containerd-c42e1c94d2f6cf3f7476ca0ec5e67a35fa972d8ce2133d65be19ee1a3ca37c4b.scope: Deactivated successfully.
Jul 2 00:44:30.802851 env[1216]: time="2024-07-02T00:44:30.802793641Z" level=info msg="shim disconnected" id=c42e1c94d2f6cf3f7476ca0ec5e67a35fa972d8ce2133d65be19ee1a3ca37c4b
Jul 2 00:44:30.802851 env[1216]: time="2024-07-02T00:44:30.802844520Z" level=warning msg="cleaning up after shim disconnected" id=c42e1c94d2f6cf3f7476ca0ec5e67a35fa972d8ce2133d65be19ee1a3ca37c4b namespace=k8s.io
Jul 2 00:44:30.802851 env[1216]: time="2024-07-02T00:44:30.802857559Z" level=info msg="cleaning up dead shim"
Jul 2 00:44:30.813286 env[1216]: time="2024-07-02T00:44:30.813067001Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3946 runtime=io.containerd.runc.v2\n"
Jul 2 00:44:31.089582 kubelet[2025]: I0702 00:44:31.089415 2025 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7cef45c6-b4a4-42c8-89dc-0a7434c6bdad" path="/var/lib/kubelet/pods/7cef45c6-b4a4-42c8-89dc-0a7434c6bdad/volumes"
Jul 2 00:44:31.279195 kubelet[2025]: E0702 00:44:31.278921 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:44:31.280852 env[1216]: time="2024-07-02T00:44:31.280816805Z" level=info msg="CreateContainer within sandbox \"779027b808bafc60fe85e977f828f75c0a0df9ad502bcd1489dbd25220f503a4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 00:44:31.290460 env[1216]: time="2024-07-02T00:44:31.290402811Z" level=info msg="CreateContainer within sandbox \"779027b808bafc60fe85e977f828f75c0a0df9ad502bcd1489dbd25220f503a4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"03e9c6329cc330701573be10e0474a244f8af71646f9dcbdac75028fa74086ac\""
Jul 2 00:44:31.291046 env[1216]: time="2024-07-02T00:44:31.291003913Z" level=info msg="StartContainer for \"03e9c6329cc330701573be10e0474a244f8af71646f9dcbdac75028fa74086ac\""
Jul 2 00:44:31.306747 systemd[1]: Started cri-containerd-03e9c6329cc330701573be10e0474a244f8af71646f9dcbdac75028fa74086ac.scope.
Jul 2 00:44:31.342616 env[1216]: time="2024-07-02T00:44:31.342210245Z" level=info msg="StartContainer for \"03e9c6329cc330701573be10e0474a244f8af71646f9dcbdac75028fa74086ac\" returns successfully"
Jul 2 00:44:31.348744 systemd[1]: cri-containerd-03e9c6329cc330701573be10e0474a244f8af71646f9dcbdac75028fa74086ac.scope: Deactivated successfully.
Jul 2 00:44:31.378692 env[1216]: time="2024-07-02T00:44:31.378635321Z" level=info msg="shim disconnected" id=03e9c6329cc330701573be10e0474a244f8af71646f9dcbdac75028fa74086ac
Jul 2 00:44:31.378692 env[1216]: time="2024-07-02T00:44:31.378678320Z" level=warning msg="cleaning up after shim disconnected" id=03e9c6329cc330701573be10e0474a244f8af71646f9dcbdac75028fa74086ac namespace=k8s.io
Jul 2 00:44:31.378692 env[1216]: time="2024-07-02T00:44:31.378687080Z" level=info msg="cleaning up dead shim"
Jul 2 00:44:31.385692 env[1216]: time="2024-07-02T00:44:31.385648680Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4006 runtime=io.containerd.runc.v2\n"
Jul 2 00:44:32.149884 kubelet[2025]: E0702 00:44:32.149839 2025 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 00:44:32.282514 kubelet[2025]: E0702 00:44:32.282486 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:44:32.284598 env[1216]: time="2024-07-02T00:44:32.284557313Z" level=info msg="CreateContainer within sandbox \"779027b808bafc60fe85e977f828f75c0a0df9ad502bcd1489dbd25220f503a4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 00:44:32.305050 env[1216]: time="2024-07-02T00:44:32.305004176Z" level=info msg="CreateContainer within sandbox \"779027b808bafc60fe85e977f828f75c0a0df9ad502bcd1489dbd25220f503a4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7423e6a05f1cf74e46006bf9bccd01a0ea518687359da3a447f90e88d0c9576c\""
Jul 2 00:44:32.305668 env[1216]: time="2024-07-02T00:44:32.305643759Z" level=info msg="StartContainer for \"7423e6a05f1cf74e46006bf9bccd01a0ea518687359da3a447f90e88d0c9576c\""
Jul 2 00:44:32.320509 systemd[1]: Started cri-containerd-7423e6a05f1cf74e46006bf9bccd01a0ea518687359da3a447f90e88d0c9576c.scope.
Jul 2 00:44:32.354235 env[1216]: time="2024-07-02T00:44:32.354182964Z" level=info msg="StartContainer for \"7423e6a05f1cf74e46006bf9bccd01a0ea518687359da3a447f90e88d0c9576c\" returns successfully"
Jul 2 00:44:32.358244 systemd[1]: cri-containerd-7423e6a05f1cf74e46006bf9bccd01a0ea518687359da3a447f90e88d0c9576c.scope: Deactivated successfully.
Jul 2 00:44:32.378912 env[1216]: time="2024-07-02T00:44:32.378861156Z" level=info msg="shim disconnected" id=7423e6a05f1cf74e46006bf9bccd01a0ea518687359da3a447f90e88d0c9576c
Jul 2 00:44:32.378912 env[1216]: time="2024-07-02T00:44:32.378908475Z" level=warning msg="cleaning up after shim disconnected" id=7423e6a05f1cf74e46006bf9bccd01a0ea518687359da3a447f90e88d0c9576c namespace=k8s.io
Jul 2 00:44:32.378912 env[1216]: time="2024-07-02T00:44:32.378918115Z" level=info msg="cleaning up dead shim"
Jul 2 00:44:32.386537 env[1216]: time="2024-07-02T00:44:32.386488676Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4065 runtime=io.containerd.runc.v2\n"
Jul 2 00:44:32.526301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7423e6a05f1cf74e46006bf9bccd01a0ea518687359da3a447f90e88d0c9576c-rootfs.mount: Deactivated successfully.
Jul 2 00:44:33.286300 kubelet[2025]: E0702 00:44:33.286269 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:44:33.288268 env[1216]: time="2024-07-02T00:44:33.288213863Z" level=info msg="CreateContainer within sandbox \"779027b808bafc60fe85e977f828f75c0a0df9ad502bcd1489dbd25220f503a4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 00:44:33.300207 env[1216]: time="2024-07-02T00:44:33.300154298Z" level=info msg="CreateContainer within sandbox \"779027b808bafc60fe85e977f828f75c0a0df9ad502bcd1489dbd25220f503a4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2b25cf1aee1fab9d815029f294be8ebd875e76d10c1ce3a4acbab4089796ed5b\""
Jul 2 00:44:33.300818 env[1216]: time="2024-07-02T00:44:33.300793362Z" level=info msg="StartContainer for \"2b25cf1aee1fab9d815029f294be8ebd875e76d10c1ce3a4acbab4089796ed5b\""
Jul 2 00:44:33.318566 systemd[1]: Started cri-containerd-2b25cf1aee1fab9d815029f294be8ebd875e76d10c1ce3a4acbab4089796ed5b.scope.
Jul 2 00:44:33.352272 systemd[1]: cri-containerd-2b25cf1aee1fab9d815029f294be8ebd875e76d10c1ce3a4acbab4089796ed5b.scope: Deactivated successfully.
Jul 2 00:44:33.356220 env[1216]: time="2024-07-02T00:44:33.356176917Z" level=info msg="StartContainer for \"2b25cf1aee1fab9d815029f294be8ebd875e76d10c1ce3a4acbab4089796ed5b\" returns successfully"
Jul 2 00:44:33.378013 env[1216]: time="2024-07-02T00:44:33.377961475Z" level=info msg="shim disconnected" id=2b25cf1aee1fab9d815029f294be8ebd875e76d10c1ce3a4acbab4089796ed5b
Jul 2 00:44:33.378013 env[1216]: time="2024-07-02T00:44:33.378009074Z" level=warning msg="cleaning up after shim disconnected" id=2b25cf1aee1fab9d815029f294be8ebd875e76d10c1ce3a4acbab4089796ed5b namespace=k8s.io
Jul 2 00:44:33.378013 env[1216]: time="2024-07-02T00:44:33.378019154Z" level=info msg="cleaning up dead shim"
Jul 2 00:44:33.385247 env[1216]: time="2024-07-02T00:44:33.385199622Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:44:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4120 runtime=io.containerd.runc.v2\n"
Jul 2 00:44:33.526369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b25cf1aee1fab9d815029f294be8ebd875e76d10c1ce3a4acbab4089796ed5b-rootfs.mount: Deactivated successfully.
Jul 2 00:44:34.290198 kubelet[2025]: E0702 00:44:34.290160 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:44:34.292169 env[1216]: time="2024-07-02T00:44:34.292104688Z" level=info msg="CreateContainer within sandbox \"779027b808bafc60fe85e977f828f75c0a0df9ad502bcd1489dbd25220f503a4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 00:44:34.307311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2343920358.mount: Deactivated successfully.
Jul 2 00:44:34.308236 env[1216]: time="2024-07-02T00:44:34.308198259Z" level=info msg="CreateContainer within sandbox \"779027b808bafc60fe85e977f828f75c0a0df9ad502bcd1489dbd25220f503a4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3dd63d875b783ac7023d85f550bd48f2689ad6553df05fdacc87bf4d73e1f342\""
Jul 2 00:44:34.308941 env[1216]: time="2024-07-02T00:44:34.308909124Z" level=info msg="StartContainer for \"3dd63d875b783ac7023d85f550bd48f2689ad6553df05fdacc87bf4d73e1f342\""
Jul 2 00:44:34.327404 systemd[1]: Started cri-containerd-3dd63d875b783ac7023d85f550bd48f2689ad6553df05fdacc87bf4d73e1f342.scope.
Jul 2 00:44:34.358973 env[1216]: time="2024-07-02T00:44:34.358923320Z" level=info msg="StartContainer for \"3dd63d875b783ac7023d85f550bd48f2689ad6553df05fdacc87bf4d73e1f342\" returns successfully"
Jul 2 00:44:34.631208 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Jul 2 00:44:35.294663 kubelet[2025]: E0702 00:44:35.294609 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:44:35.430450 systemd[1]: run-containerd-runc-k8s.io-3dd63d875b783ac7023d85f550bd48f2689ad6553df05fdacc87bf4d73e1f342-runc.oVdIXC.mount: Deactivated successfully.
Jul 2 00:44:36.619209 kubelet[2025]: E0702 00:44:36.619166 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:44:37.366538 systemd-networkd[1041]: lxc_health: Link UP
Jul 2 00:44:37.371178 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 00:44:37.372271 systemd-networkd[1041]: lxc_health: Gained carrier
Jul 2 00:44:37.546268 systemd[1]: run-containerd-runc-k8s.io-3dd63d875b783ac7023d85f550bd48f2689ad6553df05fdacc87bf4d73e1f342-runc.2QBt3F.mount: Deactivated successfully.
Jul 2 00:44:38.619808 kubelet[2025]: E0702 00:44:38.619769 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:44:38.637443 kubelet[2025]: I0702 00:44:38.637397 2025 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-62rpt" podStartSLOduration=8.637345653 podStartE2EDuration="8.637345653s" podCreationTimestamp="2024-07-02 00:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:44:35.312671802 +0000 UTC m=+88.335040330" watchObservedRunningTime="2024-07-02 00:44:38.637345653 +0000 UTC m=+91.659714181"
Jul 2 00:44:39.022282 systemd-networkd[1041]: lxc_health: Gained IPv6LL
Jul 2 00:44:39.301869 kubelet[2025]: E0702 00:44:39.301742 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:44:40.303637 kubelet[2025]: E0702 00:44:40.303588 2025 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:44:44.011737 sshd[3836]: pam_unix(sshd:session): session closed for user core
Jul 2 00:44:44.014894 systemd[1]: sshd@24-10.0.0.35:22-10.0.0.1:47670.service: Deactivated successfully.
Jul 2 00:44:44.015645 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 00:44:44.016218 systemd-logind[1205]: Session 25 logged out. Waiting for processes to exit.
Jul 2 00:44:44.017248 systemd-logind[1205]: Removed session 25.