Sep 13 01:32:56.054144 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 13 01:32:56.054163 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 12 23:05:37 -00 2025
Sep 13 01:32:56.054171 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 13 01:32:56.054178 kernel: printk: bootconsole [pl11] enabled
Sep 13 01:32:56.054183 kernel: efi: EFI v2.70 by EDK II
Sep 13 01:32:56.054189 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3761cf98
Sep 13 01:32:56.054195 kernel: random: crng init done
Sep 13 01:32:56.054201 kernel: ACPI: Early table checksum verification disabled
Sep 13 01:32:56.054206 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 13 01:32:56.054212 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:56.054217 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:56.054223 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 13 01:32:56.054229 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:56.054235 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:56.054241 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:56.054247 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:56.054253 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:56.054260 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:56.054266 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 13 01:32:56.054271 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:56.054277 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 13 01:32:56.054283 kernel: NUMA: Failed to initialise from firmware
Sep 13 01:32:56.054289 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Sep 13 01:32:56.054295 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff]
Sep 13 01:32:56.054300 kernel: Zone ranges:
Sep 13 01:32:56.054306 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 13 01:32:56.054312 kernel: DMA32 empty
Sep 13 01:32:56.054317 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 13 01:32:56.054324 kernel: Movable zone start for each node
Sep 13 01:32:56.054329 kernel: Early memory node ranges
Sep 13 01:32:56.054335 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 13 01:32:56.054341 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Sep 13 01:32:56.054346 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 13 01:32:56.054352 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 13 01:32:56.054357 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 13 01:32:56.054363 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 13 01:32:56.054369 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 13 01:32:56.054374 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 13 01:32:56.054380 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 13 01:32:56.054386 kernel: psci: probing for conduit method from ACPI.
Sep 13 01:32:56.054395 kernel: psci: PSCIv1.1 detected in firmware.
Sep 13 01:32:56.054401 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 01:32:56.054407 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 13 01:32:56.054413 kernel: psci: SMC Calling Convention v1.4
Sep 13 01:32:56.054419 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Sep 13 01:32:56.054426 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Sep 13 01:32:56.054432 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Sep 13 01:32:56.054438 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Sep 13 01:32:56.054445 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 13 01:32:56.054451 kernel: Detected PIPT I-cache on CPU0
Sep 13 01:32:56.054472 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 01:32:56.054480 kernel: CPU features: detected: Hardware dirty bit management
Sep 13 01:32:56.054486 kernel: CPU features: detected: Spectre-BHB
Sep 13 01:32:56.054492 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 01:32:56.054498 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 01:32:56.054504 kernel: CPU features: detected: ARM erratum 1418040
Sep 13 01:32:56.054513 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 13 01:32:56.054519 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 13 01:32:56.054525 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 13 01:32:56.054531 kernel: Policy zone: Normal
Sep 13 01:32:56.054539 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 01:32:56.054546 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 01:32:56.054552 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 01:32:56.054558 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 01:32:56.054564 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 01:32:56.054570 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Sep 13 01:32:56.054577 kernel: Memory: 3986880K/4194160K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 207280K reserved, 0K cma-reserved)
Sep 13 01:32:56.054585 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 01:32:56.054591 kernel: trace event string verifier disabled
Sep 13 01:32:56.054597 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 01:32:56.054604 kernel: rcu: RCU event tracing is enabled.
Sep 13 01:32:56.054610 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 01:32:56.054616 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 01:32:56.054623 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 01:32:56.054629 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 01:32:56.054635 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 01:32:56.054641 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 01:32:56.054647 kernel: GICv3: 960 SPIs implemented
Sep 13 01:32:56.054654 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 01:32:56.054660 kernel: GICv3: Distributor has no Range Selector support
Sep 13 01:32:56.054666 kernel: Root IRQ handler: gic_handle_irq
Sep 13 01:32:56.054672 kernel: GICv3: 16 PPIs implemented
Sep 13 01:32:56.054678 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 13 01:32:56.054684 kernel: ITS: No ITS available, not enabling LPIs
Sep 13 01:32:56.054691 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 01:32:56.054697 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 13 01:32:56.054703 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 13 01:32:56.054709 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 13 01:32:56.054715 kernel: Console: colour dummy device 80x25
Sep 13 01:32:56.054723 kernel: printk: console [tty1] enabled
Sep 13 01:32:56.054729 kernel: ACPI: Core revision 20210730
Sep 13 01:32:56.054736 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 13 01:32:56.054743 kernel: pid_max: default: 32768 minimum: 301
Sep 13 01:32:56.054749 kernel: LSM: Security Framework initializing
Sep 13 01:32:56.054755 kernel: SELinux: Initializing.
Sep 13 01:32:56.054761 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 01:32:56.054768 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 01:32:56.054774 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 13 01:32:56.054781 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 13 01:32:56.054788 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 01:32:56.054794 kernel: Remapping and enabling EFI services.
Sep 13 01:32:56.054800 kernel: smp: Bringing up secondary CPUs ...
Sep 13 01:32:56.054807 kernel: Detected PIPT I-cache on CPU1
Sep 13 01:32:56.054813 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 13 01:32:56.054819 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 01:32:56.054826 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 13 01:32:56.054832 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 01:32:56.054838 kernel: SMP: Total of 2 processors activated.
Sep 13 01:32:56.054846 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 01:32:56.054853 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 13 01:32:56.054859 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 13 01:32:56.054866 kernel: CPU features: detected: CRC32 instructions
Sep 13 01:32:56.054872 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 13 01:32:56.054878 kernel: CPU features: detected: LSE atomic instructions
Sep 13 01:32:56.054884 kernel: CPU features: detected: Privileged Access Never
Sep 13 01:32:56.054891 kernel: CPU: All CPU(s) started at EL1
Sep 13 01:32:56.054897 kernel: alternatives: patching kernel code
Sep 13 01:32:56.054904 kernel: devtmpfs: initialized
Sep 13 01:32:56.054915 kernel: KASLR enabled
Sep 13 01:32:56.054921 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 01:32:56.054930 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 01:32:56.054936 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 01:32:56.054943 kernel: SMBIOS 3.1.0 present.
Sep 13 01:32:56.054949 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 13 01:32:56.054956 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 01:32:56.054963 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 01:32:56.054971 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 01:32:56.054978 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 01:32:56.054985 kernel: audit: initializing netlink subsys (disabled)
Sep 13 01:32:56.054992 kernel: audit: type=2000 audit(0.089:1): state=initialized audit_enabled=0 res=1
Sep 13 01:32:56.054999 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 01:32:56.055005 kernel: cpuidle: using governor menu
Sep 13 01:32:56.055012 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 01:32:56.055019 kernel: ASID allocator initialised with 32768 entries
Sep 13 01:32:56.055026 kernel: ACPI: bus type PCI registered
Sep 13 01:32:56.055033 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 01:32:56.055040 kernel: Serial: AMBA PL011 UART driver
Sep 13 01:32:56.055046 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 01:32:56.055053 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 01:32:56.055059 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 01:32:56.055066 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 01:32:56.055072 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 01:32:56.055080 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 01:32:56.055087 kernel: ACPI: Added _OSI(Module Device)
Sep 13 01:32:56.055094 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 01:32:56.055100 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 01:32:56.055107 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 01:32:56.055113 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 01:32:56.055120 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 01:32:56.055127 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 01:32:56.055133 kernel: ACPI: Interpreter enabled
Sep 13 01:32:56.055141 kernel: ACPI: Using GIC for interrupt routing
Sep 13 01:32:56.055148 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 13 01:32:56.055155 kernel: printk: console [ttyAMA0] enabled
Sep 13 01:32:56.055162 kernel: printk: bootconsole [pl11] disabled
Sep 13 01:32:56.055168 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 13 01:32:56.055175 kernel: iommu: Default domain type: Translated
Sep 13 01:32:56.055182 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 01:32:56.055189 kernel: vgaarb: loaded
Sep 13 01:32:56.055195 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 01:32:56.055202 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 01:32:56.055210 kernel: PTP clock support registered
Sep 13 01:32:56.055216 kernel: Registered efivars operations
Sep 13 01:32:56.055222 kernel: No ACPI PMU IRQ for CPU0
Sep 13 01:32:56.055229 kernel: No ACPI PMU IRQ for CPU1
Sep 13 01:32:56.055236 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 01:32:56.055242 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 01:32:56.055249 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 01:32:56.055256 kernel: pnp: PnP ACPI init
Sep 13 01:32:56.055262 kernel: pnp: PnP ACPI: found 0 devices
Sep 13 01:32:56.055270 kernel: NET: Registered PF_INET protocol family
Sep 13 01:32:56.055277 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 01:32:56.055284 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 01:32:56.055291 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 01:32:56.055297 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 01:32:56.055304 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 01:32:56.055311 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 01:32:56.055318 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 01:32:56.055326 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 01:32:56.055332 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 01:32:56.055339 kernel: PCI: CLS 0 bytes, default 64
Sep 13 01:32:56.055346 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 13 01:32:56.055352 kernel: kvm [1]: HYP mode not available
Sep 13 01:32:56.055359 kernel: Initialise system trusted keyrings
Sep 13 01:32:56.055366 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 01:32:56.055372 kernel: Key type asymmetric registered
Sep 13 01:32:56.055379 kernel: Asymmetric key parser 'x509' registered
Sep 13 01:32:56.055387 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 01:32:56.055394 kernel: io scheduler mq-deadline registered
Sep 13 01:32:56.055400 kernel: io scheduler kyber registered
Sep 13 01:32:56.055407 kernel: io scheduler bfq registered
Sep 13 01:32:56.055413 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 01:32:56.055420 kernel: thunder_xcv, ver 1.0
Sep 13 01:32:56.055426 kernel: thunder_bgx, ver 1.0
Sep 13 01:32:56.055433 kernel: nicpf, ver 1.0
Sep 13 01:32:56.055439 kernel: nicvf, ver 1.0
Sep 13 01:32:56.055576 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 01:32:56.055640 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T01:32:55 UTC (1757727175)
Sep 13 01:32:56.055650 kernel: efifb: probing for efifb
Sep 13 01:32:56.055657 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 13 01:32:56.055664 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 13 01:32:56.055670 kernel: efifb: scrolling: redraw
Sep 13 01:32:56.055677 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 01:32:56.055683 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 01:32:56.055691 kernel: fb0: EFI VGA frame buffer device
Sep 13 01:32:56.055698 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 13 01:32:56.055705 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 01:32:56.055712 kernel: NET: Registered PF_INET6 protocol family
Sep 13 01:32:56.055718 kernel: Segment Routing with IPv6
Sep 13 01:32:56.055725 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 01:32:56.055732 kernel: NET: Registered PF_PACKET protocol family
Sep 13 01:32:56.055738 kernel: Key type dns_resolver registered
Sep 13 01:32:56.055745 kernel: registered taskstats version 1
Sep 13 01:32:56.055751 kernel: Loading compiled-in X.509 certificates
Sep 13 01:32:56.055759 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 47ac98e9306f36eebe4291d409359a5a5d0c2b9c'
Sep 13 01:32:56.055766 kernel: Key type .fscrypt registered
Sep 13 01:32:56.055773 kernel: Key type fscrypt-provisioning registered
Sep 13 01:32:56.055780 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 01:32:56.055787 kernel: ima: Allocated hash algorithm: sha1
Sep 13 01:32:56.055793 kernel: ima: No architecture policies found
Sep 13 01:32:56.055800 kernel: clk: Disabling unused clocks
Sep 13 01:32:56.055806 kernel: Freeing unused kernel memory: 36416K
Sep 13 01:32:56.055814 kernel: Run /init as init process
Sep 13 01:32:56.055821 kernel: with arguments:
Sep 13 01:32:56.055827 kernel: /init
Sep 13 01:32:56.055833 kernel: with environment:
Sep 13 01:32:56.055840 kernel: HOME=/
Sep 13 01:32:56.055846 kernel: TERM=linux
Sep 13 01:32:56.055853 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 01:32:56.055861 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 01:32:56.055872 systemd[1]: Detected virtualization microsoft.
Sep 13 01:32:56.055879 systemd[1]: Detected architecture arm64.
Sep 13 01:32:56.055886 systemd[1]: Running in initrd.
Sep 13 01:32:56.055893 systemd[1]: No hostname configured, using default hostname.
Sep 13 01:32:56.055900 systemd[1]: Hostname set to .
Sep 13 01:32:56.055908 systemd[1]: Initializing machine ID from random generator.
Sep 13 01:32:56.055915 systemd[1]: Queued start job for default target initrd.target.
Sep 13 01:32:56.055922 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 01:32:56.055930 systemd[1]: Reached target cryptsetup.target.
Sep 13 01:32:56.055937 systemd[1]: Reached target paths.target.
Sep 13 01:32:56.055944 systemd[1]: Reached target slices.target.
Sep 13 01:32:56.055951 systemd[1]: Reached target swap.target.
Sep 13 01:32:56.055958 systemd[1]: Reached target timers.target.
Sep 13 01:32:56.055965 systemd[1]: Listening on iscsid.socket.
Sep 13 01:32:56.055972 systemd[1]: Listening on iscsiuio.socket.
Sep 13 01:32:56.055979 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 01:32:56.055988 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 01:32:56.055995 systemd[1]: Listening on systemd-journald.socket.
Sep 13 01:32:56.056002 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 01:32:56.056009 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 01:32:56.056016 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 01:32:56.056024 systemd[1]: Reached target sockets.target.
Sep 13 01:32:56.056031 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 01:32:56.056038 systemd[1]: Finished network-cleanup.service.
Sep 13 01:32:56.056045 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 01:32:56.056053 systemd[1]: Starting systemd-journald.service...
Sep 13 01:32:56.056061 systemd[1]: Starting systemd-modules-load.service...
Sep 13 01:32:56.056068 systemd[1]: Starting systemd-resolved.service...
Sep 13 01:32:56.056075 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 01:32:56.056087 systemd-journald[276]: Journal started
Sep 13 01:32:56.056126 systemd-journald[276]: Runtime Journal (/run/log/journal/17310b7639864f97bf94494df5b659f1) is 8.0M, max 78.5M, 70.5M free.
Sep 13 01:32:56.035800 systemd-modules-load[277]: Inserted module 'overlay'
Sep 13 01:32:56.082481 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 01:32:56.088746 systemd-resolved[278]: Positive Trust Anchors:
Sep 13 01:32:56.104766 kernel: Bridge firewalling registered
Sep 13 01:32:56.104791 systemd[1]: Started systemd-journald.service.
Sep 13 01:32:56.088762 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 01:32:56.088793 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 01:32:56.194594 kernel: SCSI subsystem initialized
Sep 13 01:32:56.194618 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 01:32:56.194628 kernel: audit: type=1130 audit(1757727176.125:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.194637 kernel: device-mapper: uevent: version 1.0.3
Sep 13 01:32:56.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.091006 systemd-resolved[278]: Defaulting to hostname 'linux'.
Sep 13 01:32:56.226535 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 01:32:56.226555 kernel: audit: type=1130 audit(1757727176.202:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.091024 systemd-modules-load[277]: Inserted module 'br_netfilter'
Sep 13 01:32:56.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.159244 systemd[1]: Started systemd-resolved.service.
Sep 13 01:32:56.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.207218 systemd-modules-load[277]: Inserted module 'dm_multipath'
Sep 13 01:32:56.286722 kernel: audit: type=1130 audit(1757727176.231:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.286742 kernel: audit: type=1130 audit(1757727176.257:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.209511 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 01:32:56.315540 kernel: audit: type=1130 audit(1757727176.292:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.231609 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 01:32:56.257719 systemd[1]: Finished systemd-modules-load.service.
Sep 13 01:32:56.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.292708 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 01:32:56.357043 kernel: audit: type=1130 audit(1757727176.317:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.317479 systemd[1]: Reached target nss-lookup.target.
Sep 13 01:32:56.344280 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 01:32:56.353114 systemd[1]: Starting systemd-sysctl.service...
Sep 13 01:32:56.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.362404 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 01:32:56.377927 systemd[1]: Finished systemd-sysctl.service.
Sep 13 01:32:56.446375 kernel: audit: type=1130 audit(1757727176.389:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.446398 kernel: audit: type=1130 audit(1757727176.415:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.390835 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 01:32:56.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.415853 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 01:32:56.477736 kernel: audit: type=1130 audit(1757727176.446:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.466663 systemd[1]: Starting dracut-cmdline.service...
Sep 13 01:32:56.485621 dracut-cmdline[299]: dracut-dracut-053
Sep 13 01:32:56.490342 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 01:32:56.571486 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 01:32:56.583481 kernel: iscsi: registered transport (tcp)
Sep 13 01:32:56.604986 kernel: iscsi: registered transport (qla4xxx)
Sep 13 01:32:56.605051 kernel: QLogic iSCSI HBA Driver
Sep 13 01:32:56.635302 systemd[1]: Finished dracut-cmdline.service.
Sep 13 01:32:56.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.641024 systemd[1]: Starting dracut-pre-udev.service...
Sep 13 01:32:56.693480 kernel: raid6: neonx8 gen() 13743 MB/s
Sep 13 01:32:56.713470 kernel: raid6: neonx8 xor() 10802 MB/s
Sep 13 01:32:56.733469 kernel: raid6: neonx4 gen() 13543 MB/s
Sep 13 01:32:56.754470 kernel: raid6: neonx4 xor() 11302 MB/s
Sep 13 01:32:56.774477 kernel: raid6: neonx2 gen() 12947 MB/s
Sep 13 01:32:56.794470 kernel: raid6: neonx2 xor() 10238 MB/s
Sep 13 01:32:56.815472 kernel: raid6: neonx1 gen() 10532 MB/s
Sep 13 01:32:56.835468 kernel: raid6: neonx1 xor() 8781 MB/s
Sep 13 01:32:56.855468 kernel: raid6: int64x8 gen() 6276 MB/s
Sep 13 01:32:56.877468 kernel: raid6: int64x8 xor() 3544 MB/s
Sep 13 01:32:56.898468 kernel: raid6: int64x4 gen() 7198 MB/s
Sep 13 01:32:56.919468 kernel: raid6: int64x4 xor() 3855 MB/s
Sep 13 01:32:56.940469 kernel: raid6: int64x2 gen() 6153 MB/s
Sep 13 01:32:56.960468 kernel: raid6: int64x2 xor() 3321 MB/s
Sep 13 01:32:56.980468 kernel: raid6: int64x1 gen() 5047 MB/s
Sep 13 01:32:57.005833 kernel: raid6: int64x1 xor() 2647 MB/s
Sep 13 01:32:57.005847 kernel: raid6: using algorithm neonx8 gen() 13743 MB/s
Sep 13 01:32:57.005856 kernel: raid6: .... xor() 10802 MB/s, rmw enabled
Sep 13 01:32:57.010204 kernel: raid6: using neon recovery algorithm
Sep 13 01:32:57.033623 kernel: xor: measuring software checksum speed
Sep 13 01:32:57.033635 kernel: 8regs : 17166 MB/sec
Sep 13 01:32:57.037620 kernel: 32regs : 20681 MB/sec
Sep 13 01:32:57.041470 kernel: arm64_neon : 27804 MB/sec
Sep 13 01:32:57.041480 kernel: xor: using function: arm64_neon (27804 MB/sec)
Sep 13 01:32:57.102478 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 13 01:32:57.112381 systemd[1]: Finished dracut-pre-udev.service.
Sep 13 01:32:57.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:57.121000 audit: BPF prog-id=7 op=LOAD
Sep 13 01:32:57.121000 audit: BPF prog-id=8 op=LOAD
Sep 13 01:32:57.122014 systemd[1]: Starting systemd-udevd.service...
Sep 13 01:32:57.137322 systemd-udevd[477]: Using default interface naming scheme 'v252'.
Sep 13 01:32:57.144249 systemd[1]: Started systemd-udevd.service.
Sep 13 01:32:57.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:57.156456 systemd[1]: Starting dracut-pre-trigger.service...
Sep 13 01:32:57.172815 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
Sep 13 01:32:57.202818 systemd[1]: Finished dracut-pre-trigger.service.
Sep 13 01:32:57.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:57.209015 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 01:32:57.244989 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 01:32:57.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:57.298491 kernel: hv_vmbus: Vmbus version:5.3
Sep 13 01:32:57.322498 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 13 01:32:57.322547 kernel: hv_vmbus: registering driver hv_netvsc
Sep 13 01:32:57.322556 kernel: hv_vmbus: registering driver hv_storvsc
Sep 13 01:32:57.322565 kernel: hv_vmbus: registering driver hid_hyperv
Sep 13 01:32:57.322574 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 13 01:32:57.342404 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 13 01:32:57.356379 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 13 01:32:57.356594 kernel: scsi host1: storvsc_host_t
Sep 13 01:32:57.360005 kernel: scsi host0: storvsc_host_t
Sep 13 01:32:57.367172 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 13 01:32:57.377485 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 13 01:32:57.397992 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 13 01:32:57.399174 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 01:32:57.399193 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 13 01:32:57.413442 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 13 01:32:57.450557 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 13 01:32:57.450681 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 13 01:32:57.450762 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 13 01:32:57.450841 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 13 01:32:57.450919 kernel: sda: sda1 sda2 sda3
sda4 sda6 sda7 sda9 Sep 13 01:32:57.450935 kernel: hv_netvsc 0022487c-d206-0022-487c-d2060022487c eth0: VF slot 1 added Sep 13 01:32:57.451023 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 13 01:32:57.460489 kernel: hv_vmbus: registering driver hv_pci Sep 13 01:32:57.471488 kernel: hv_pci 06f9c2bb-bd6b-4a96-897a-5fa0bdf52b08: PCI VMBus probing: Using version 0x10004 Sep 13 01:32:57.553740 kernel: hv_pci 06f9c2bb-bd6b-4a96-897a-5fa0bdf52b08: PCI host bridge to bus bd6b:00 Sep 13 01:32:57.553839 kernel: pci_bus bd6b:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 13 01:32:57.553935 kernel: pci_bus bd6b:00: No busn resource found for root bus, will use [bus 00-ff] Sep 13 01:32:57.554009 kernel: pci bd6b:00:02.0: [15b3:1018] type 00 class 0x020000 Sep 13 01:32:57.554144 kernel: pci bd6b:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 13 01:32:57.554247 kernel: pci bd6b:00:02.0: enabling Extended Tags Sep 13 01:32:57.554333 kernel: pci bd6b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bd6b:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Sep 13 01:32:57.554411 kernel: pci_bus bd6b:00: busn_res: [bus 00-ff] end is updated to 00 Sep 13 01:32:57.554508 kernel: pci bd6b:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 13 01:32:57.591990 kernel: mlx5_core bd6b:00:02.0: enabling device (0000 -> 0002) Sep 13 01:32:57.818921 kernel: mlx5_core bd6b:00:02.0: firmware version: 16.30.1284 Sep 13 01:32:57.819042 kernel: mlx5_core bd6b:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Sep 13 01:32:57.819125 kernel: hv_netvsc 0022487c-d206-0022-487c-d2060022487c eth0: VF registering: eth1 Sep 13 01:32:57.819206 kernel: mlx5_core bd6b:00:02.0 eth1: joined to eth0 Sep 13 01:32:57.827482 kernel: mlx5_core bd6b:00:02.0 enP48491s1: renamed from eth1 Sep 13 01:32:57.934487 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (544) Sep 13 
01:32:57.943431 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 01:32:57.959404 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 01:32:58.168363 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 01:32:58.177514 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 01:32:58.195869 systemd[1]: Starting disk-uuid.service... Sep 13 01:32:58.209950 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 01:32:59.227484 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 01:32:59.228263 disk-uuid[601]: The operation has completed successfully. Sep 13 01:32:59.293701 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 01:32:59.294618 systemd[1]: Finished disk-uuid.service. Sep 13 01:32:59.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:59.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:59.311413 systemd[1]: Starting verity-setup.service... Sep 13 01:32:59.354484 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 13 01:32:59.791191 systemd[1]: Found device dev-mapper-usr.device. Sep 13 01:32:59.801107 systemd[1]: Finished verity-setup.service. Sep 13 01:32:59.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:59.806697 systemd[1]: Mounting sysusr-usr.mount... Sep 13 01:32:59.873661 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Sep 13 01:32:59.874087 systemd[1]: Mounted sysusr-usr.mount. Sep 13 01:32:59.878061 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 01:32:59.878847 systemd[1]: Starting ignition-setup.service... Sep 13 01:32:59.886799 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 01:32:59.933139 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 01:32:59.933201 kernel: BTRFS info (device sda6): using free space tree Sep 13 01:32:59.937936 kernel: BTRFS info (device sda6): has skinny extents Sep 13 01:32:59.976745 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 01:33:00.016512 kernel: kauditd_printk_skb: 10 callbacks suppressed Sep 13 01:33:00.016534 kernel: audit: type=1130 audit(1757727179.981:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:00.016546 kernel: audit: type=1334 audit(1757727179.986:22): prog-id=9 op=LOAD Sep 13 01:32:59.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:59.986000 audit: BPF prog-id=9 op=LOAD Sep 13 01:32:59.987319 systemd[1]: Starting systemd-networkd.service... Sep 13 01:33:00.042395 systemd-networkd[868]: lo: Link UP Sep 13 01:33:00.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:00.042403 systemd-networkd[868]: lo: Gained carrier Sep 13 01:33:00.080204 kernel: audit: type=1130 audit(1757727180.047:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 01:33:00.042870 systemd-networkd[868]: Enumeration completed Sep 13 01:33:00.043221 systemd[1]: Started systemd-networkd.service. Sep 13 01:33:00.048600 systemd[1]: Reached target network.target. Sep 13 01:33:00.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:00.069756 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 01:33:00.129811 kernel: audit: type=1130 audit(1757727180.097:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:00.129835 iscsid[875]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 01:33:00.129835 iscsid[875]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 13 01:33:00.129835 iscsid[875]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 13 01:33:00.129835 iscsid[875]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 01:33:00.129835 iscsid[875]: If using hardware iscsi like qla4xxx this message can be ignored. 
Sep 13 01:33:00.129835 iscsid[875]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 01:33:00.129835 iscsid[875]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 01:33:00.270968 kernel: audit: type=1130 audit(1757727180.133:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:00.270993 kernel: audit: type=1130 audit(1757727180.209:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:00.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:00.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:00.076866 systemd[1]: Starting iscsiuio.service... Sep 13 01:33:00.089590 systemd[1]: Started iscsiuio.service. Sep 13 01:33:00.099054 systemd[1]: Starting iscsid.service... Sep 13 01:33:00.129726 systemd[1]: Started iscsid.service. Sep 13 01:33:00.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:00.157595 systemd[1]: Starting dracut-initqueue.service... Sep 13 01:33:00.322682 kernel: audit: type=1130 audit(1757727180.297:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:00.197968 systemd[1]: Finished dracut-initqueue.service. 
Sep 13 01:33:00.209700 systemd[1]: Reached target remote-fs-pre.target. Sep 13 01:33:00.240289 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 01:33:00.347211 kernel: mlx5_core bd6b:00:02.0 enP48491s1: Link up Sep 13 01:33:00.347382 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 13 01:33:00.251348 systemd[1]: Reached target remote-fs.target. Sep 13 01:33:00.271924 systemd[1]: Starting dracut-pre-mount.service... Sep 13 01:33:00.284427 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 01:33:00.293224 systemd[1]: Finished dracut-pre-mount.service. Sep 13 01:33:00.394895 kernel: hv_netvsc 0022487c-d206-0022-487c-d2060022487c eth0: Data path switched to VF: enP48491s1 Sep 13 01:33:00.395068 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 01:33:00.395452 systemd-networkd[868]: enP48491s1: Link UP Sep 13 01:33:00.395549 systemd-networkd[868]: eth0: Link UP Sep 13 01:33:00.395645 systemd-networkd[868]: eth0: Gained carrier Sep 13 01:33:00.408583 systemd-networkd[868]: enP48491s1: Gained carrier Sep 13 01:33:00.418539 systemd-networkd[868]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 13 01:33:00.662443 systemd[1]: Finished ignition-setup.service. Sep 13 01:33:00.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:00.692480 kernel: audit: type=1130 audit(1757727180.666:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:00.688018 systemd[1]: Starting ignition-fetch-offline.service... 
Sep 13 01:33:01.642564 systemd-networkd[868]: eth0: Gained IPv6LL Sep 13 01:33:04.102371 ignition[895]: Ignition 2.14.0 Sep 13 01:33:04.102388 ignition[895]: Stage: fetch-offline Sep 13 01:33:04.102454 ignition[895]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:33:04.102498 ignition[895]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:33:04.208639 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:33:04.208786 ignition[895]: parsed url from cmdline: "" Sep 13 01:33:04.208791 ignition[895]: no config URL provided Sep 13 01:33:04.208799 ignition[895]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 01:33:04.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:04.215903 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 01:33:04.268826 kernel: audit: type=1130 audit(1757727184.224:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:04.208807 ignition[895]: no config at "/usr/lib/ignition/user.ign" Sep 13 01:33:04.249931 systemd[1]: Starting ignition-fetch.service... 
Sep 13 01:33:04.208813 ignition[895]: failed to fetch config: resource requires networking Sep 13 01:33:04.209094 ignition[895]: Ignition finished successfully Sep 13 01:33:04.266224 ignition[901]: Ignition 2.14.0 Sep 13 01:33:04.266231 ignition[901]: Stage: fetch Sep 13 01:33:04.266352 ignition[901]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:33:04.266374 ignition[901]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:33:04.279052 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:33:04.279305 ignition[901]: parsed url from cmdline: "" Sep 13 01:33:04.279309 ignition[901]: no config URL provided Sep 13 01:33:04.279317 ignition[901]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 01:33:04.279327 ignition[901]: no config at "/usr/lib/ignition/user.ign" Sep 13 01:33:04.279370 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 13 01:33:04.390181 ignition[901]: GET result: OK Sep 13 01:33:04.390265 ignition[901]: config has been read from IMDS userdata Sep 13 01:33:04.394303 unknown[901]: fetched base config from "system" Sep 13 01:33:04.433888 kernel: audit: type=1130 audit(1757727184.404:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:04.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:04.390310 ignition[901]: parsing config with SHA512: 2668a5fb8471ca01bf55e673926f20a9a37786700d824c195fffa3fa4bab657c5f22044dbac48e6c6535a765953d4b2a2b4df6a714b5f710225dcc076db36452 Sep 13 01:33:04.394313 unknown[901]: fetched base config from "system" Sep 13 01:33:04.395019 ignition[901]: fetch: fetch complete Sep 13 01:33:04.394318 unknown[901]: fetched user config from "azure" Sep 13 01:33:04.395030 ignition[901]: fetch: fetch passed Sep 13 01:33:04.399734 systemd[1]: Finished ignition-fetch.service. Sep 13 01:33:04.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:04.395093 ignition[901]: Ignition finished successfully Sep 13 01:33:04.406051 systemd[1]: Starting ignition-kargs.service... Sep 13 01:33:04.440892 ignition[907]: Ignition 2.14.0 Sep 13 01:33:04.452453 systemd[1]: Finished ignition-kargs.service. Sep 13 01:33:04.440899 ignition[907]: Stage: kargs Sep 13 01:33:04.458479 systemd[1]: Starting ignition-disks.service... Sep 13 01:33:04.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:04.441041 ignition[907]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:33:04.489612 systemd[1]: Finished ignition-disks.service. Sep 13 01:33:04.441060 ignition[907]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:33:04.496888 systemd[1]: Reached target initrd-root-device.target. Sep 13 01:33:04.445696 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:33:04.506667 systemd[1]: Reached target local-fs-pre.target. 
Sep 13 01:33:04.447816 ignition[907]: kargs: kargs passed Sep 13 01:33:04.518266 systemd[1]: Reached target local-fs.target. Sep 13 01:33:04.447911 ignition[907]: Ignition finished successfully Sep 13 01:33:04.527336 systemd[1]: Reached target sysinit.target. Sep 13 01:33:04.477287 ignition[913]: Ignition 2.14.0 Sep 13 01:33:04.537964 systemd[1]: Reached target basic.target. Sep 13 01:33:04.477294 ignition[913]: Stage: disks Sep 13 01:33:04.550254 systemd[1]: Starting systemd-fsck-root.service... Sep 13 01:33:04.477412 ignition[913]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:33:04.477430 ignition[913]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:33:04.483264 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:33:04.488656 ignition[913]: disks: disks passed Sep 13 01:33:04.488726 ignition[913]: Ignition finished successfully Sep 13 01:33:04.623930 systemd-fsck[921]: ROOT: clean, 629/7326000 files, 481083/7359488 blocks Sep 13 01:33:04.634813 systemd[1]: Finished systemd-fsck-root.service. Sep 13 01:33:04.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:04.640254 systemd[1]: Mounting sysroot.mount... Sep 13 01:33:04.676339 systemd[1]: Mounted sysroot.mount. Sep 13 01:33:04.683668 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 01:33:04.680329 systemd[1]: Reached target initrd-root-fs.target. Sep 13 01:33:04.718232 systemd[1]: Mounting sysroot-usr.mount... Sep 13 01:33:04.723512 systemd[1]: Starting flatcar-metadata-hostname.service... 
Sep 13 01:33:04.736802 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 01:33:04.736850 systemd[1]: Reached target ignition-diskful.target. Sep 13 01:33:04.753621 systemd[1]: Mounted sysroot-usr.mount. Sep 13 01:33:04.816233 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 01:33:04.821695 systemd[1]: Starting initrd-setup-root.service... Sep 13 01:33:04.853484 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (932) Sep 13 01:33:04.861526 initrd-setup-root[937]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 01:33:04.873448 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 01:33:04.873500 kernel: BTRFS info (device sda6): using free space tree Sep 13 01:33:04.873511 kernel: BTRFS info (device sda6): has skinny extents Sep 13 01:33:04.887923 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 01:33:04.908217 initrd-setup-root[963]: cut: /sysroot/etc/group: No such file or directory Sep 13 01:33:04.945710 initrd-setup-root[971]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 01:33:04.970860 initrd-setup-root[979]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 01:33:05.798132 systemd[1]: Finished initrd-setup-root.service. Sep 13 01:33:05.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.803871 systemd[1]: Starting ignition-mount.service... Sep 13 01:33:05.842319 kernel: kauditd_printk_skb: 3 callbacks suppressed Sep 13 01:33:05.842345 kernel: audit: type=1130 audit(1757727185.802:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:05.836378 systemd[1]: Starting sysroot-boot.service... Sep 13 01:33:05.847645 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 13 01:33:05.847756 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Sep 13 01:33:05.884665 systemd[1]: Finished sysroot-boot.service. Sep 13 01:33:05.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.911485 kernel: audit: type=1130 audit(1757727185.888:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.918716 ignition[1002]: INFO : Ignition 2.14.0 Sep 13 01:33:05.922885 ignition[1002]: INFO : Stage: mount Sep 13 01:33:05.922885 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:33:05.922885 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:33:05.947332 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:33:05.947332 ignition[1002]: INFO : mount: mount passed Sep 13 01:33:05.947332 ignition[1002]: INFO : Ignition finished successfully Sep 13 01:33:05.985463 kernel: audit: type=1130 audit(1757727185.951:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:05.927965 systemd[1]: Finished ignition-mount.service. 
Sep 13 01:33:06.337682 coreos-metadata[931]: Sep 13 01:33:06.337 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 13 01:33:06.347991 coreos-metadata[931]: Sep 13 01:33:06.347 INFO Fetch successful Sep 13 01:33:06.383195 coreos-metadata[931]: Sep 13 01:33:06.383 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 13 01:33:06.395854 coreos-metadata[931]: Sep 13 01:33:06.395 INFO Fetch successful Sep 13 01:33:06.413384 coreos-metadata[931]: Sep 13 01:33:06.413 INFO wrote hostname ci-3510.3.8-n-8d5f1b2fe1 to /sysroot/etc/hostname Sep 13 01:33:06.422162 systemd[1]: Finished flatcar-metadata-hostname.service. Sep 13 01:33:06.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:06.449763 systemd[1]: Starting ignition-files.service... Sep 13 01:33:06.459681 kernel: audit: type=1130 audit(1757727186.427:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:06.460652 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 01:33:06.488484 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1010) Sep 13 01:33:06.502873 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 01:33:06.502920 kernel: BTRFS info (device sda6): using free space tree Sep 13 01:33:06.502929 kernel: BTRFS info (device sda6): has skinny extents Sep 13 01:33:06.516293 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Sep 13 01:33:06.530706 ignition[1029]: INFO : Ignition 2.14.0 Sep 13 01:33:06.530706 ignition[1029]: INFO : Stage: files Sep 13 01:33:06.540202 ignition[1029]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:33:06.540202 ignition[1029]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:33:06.540202 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:33:06.540202 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Sep 13 01:33:06.575491 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 01:33:06.575491 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 01:33:06.639033 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 01:33:06.646689 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 01:33:06.668190 unknown[1029]: wrote ssh authorized keys file for user: core Sep 13 01:33:06.673686 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 01:33:06.683777 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 01:33:06.694846 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 01:33:06.694846 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 01:33:06.694846 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 13 01:33:06.742055 ignition[1029]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 01:33:06.818901 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 01:33:06.865349 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 01:33:06.876287 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 13 01:33:07.083030 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Sep 13 01:33:07.156340 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 01:33:07.166587 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Sep 13 01:33:07.166587 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 01:33:07.166587 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 01:33:07.166587 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 01:33:07.166587 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 01:33:07.166587 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 01:33:07.166587 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 01:33:07.166587 ignition[1029]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 01:33:07.249688 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 01:33:07.249688 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 01:33:07.249688 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 01:33:07.249688 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 01:33:07.249688 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Sep 13 01:33:07.249688 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(c): oem config not found in "/usr/share/oem", looking on oem partition Sep 13 01:33:07.249688 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1980082879" Sep 13 01:33:07.249688 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(c): op(d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1980082879": device or resource busy Sep 13 01:33:07.249688 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1980082879", trying btrfs: device or resource busy Sep 13 01:33:07.249688 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1980082879" Sep 13 01:33:07.249688 ignition[1029]: INFO : files: createFilesystemsFiles: 
createFiles: op(c): op(e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1980082879" Sep 13 01:33:07.249688 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [started] unmounting "/mnt/oem1980082879" Sep 13 01:33:07.249688 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [finished] unmounting "/mnt/oem1980082879" Sep 13 01:33:07.249688 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Sep 13 01:33:07.230115 systemd[1]: mnt-oem1980082879.mount: Deactivated successfully. Sep 13 01:33:07.418532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 13 01:33:07.418532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Sep 13 01:33:07.418532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem993240877" Sep 13 01:33:07.418532 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem993240877": device or resource busy Sep 13 01:33:07.418532 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem993240877", trying btrfs: device or resource busy Sep 13 01:33:07.418532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem993240877" Sep 13 01:33:07.418532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem993240877" Sep 13 01:33:07.418532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: 
op(10): op(13): [started] unmounting "/mnt/oem993240877" Sep 13 01:33:07.418532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem993240877" Sep 13 01:33:07.418532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 13 01:33:07.418532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 01:33:07.418532 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 13 01:33:07.748049 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): GET result: OK Sep 13 01:33:07.986789 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 01:33:07.986789 ignition[1029]: INFO : files: op(15): [started] processing unit "waagent.service" Sep 13 01:33:07.986789 ignition[1029]: INFO : files: op(15): [finished] processing unit "waagent.service" Sep 13 01:33:07.986789 ignition[1029]: INFO : files: op(16): [started] processing unit "nvidia.service" Sep 13 01:33:08.047590 kernel: audit: type=1130 audit(1757727188.010:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.000737 systemd[1]: Finished ignition-files.service. 
Sep 13 01:33:08.052702 ignition[1029]: INFO : files: op(16): [finished] processing unit "nvidia.service" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: op(17): [started] processing unit "containerd.service" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: op(17): op(18): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: op(17): op(18): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: op(17): [finished] processing unit "containerd.service" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: op(19): [started] processing unit "prepare-helm.service" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: op(19): op(1a): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: op(19): [finished] processing unit "prepare-helm.service" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: op(1b): [started] setting preset to enabled for "waagent.service" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: op(1b): [finished] setting preset to enabled for "waagent.service" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: op(1c): [started] setting preset to enabled for "nvidia.service" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: op(1c): [finished] setting preset to enabled for "nvidia.service" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-helm.service" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: op(1d): [finished] setting preset to enabled for 
"prepare-helm.service" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 01:33:08.052702 ignition[1029]: INFO : files: files passed Sep 13 01:33:08.052702 ignition[1029]: INFO : Ignition finished successfully Sep 13 01:33:08.366684 kernel: audit: type=1130 audit(1757727188.076:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.366716 kernel: audit: type=1131 audit(1757727188.076:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.366729 kernel: audit: type=1130 audit(1757727188.133:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.366742 kernel: audit: type=1130 audit(1757727188.227:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.366752 kernel: audit: type=1131 audit(1757727188.253:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:08.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.014210 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 01:33:08.376430 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 01:33:08.039818 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 01:33:08.041054 systemd[1]: Starting ignition-quench.service... Sep 13 01:33:08.057778 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 01:33:08.057908 systemd[1]: Finished ignition-quench.service. Sep 13 01:33:08.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:08.076844 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 01:33:08.133754 systemd[1]: Reached target ignition-complete.target. Sep 13 01:33:08.179311 systemd[1]: Starting initrd-parse-etc.service... Sep 13 01:33:08.216642 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 01:33:08.216813 systemd[1]: Finished initrd-parse-etc.service. Sep 13 01:33:08.254427 systemd[1]: Reached target initrd-fs.target. Sep 13 01:33:08.284221 systemd[1]: Reached target initrd.target. Sep 13 01:33:08.296640 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 01:33:08.304762 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 01:33:08.351546 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 01:33:08.360898 systemd[1]: Starting initrd-cleanup.service... Sep 13 01:33:08.378848 systemd[1]: Stopped target nss-lookup.target. Sep 13 01:33:08.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.389550 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 01:33:08.405446 systemd[1]: Stopped target timers.target. Sep 13 01:33:08.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.414245 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 01:33:08.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.414312 systemd[1]: Stopped dracut-pre-pivot.service. 
Sep 13 01:33:08.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.424533 systemd[1]: Stopped target initrd.target. Sep 13 01:33:08.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.432724 systemd[1]: Stopped target basic.target. Sep 13 01:33:08.442360 systemd[1]: Stopped target ignition-complete.target. Sep 13 01:33:08.619823 iscsid[875]: iscsid shutting down. Sep 13 01:33:08.452381 systemd[1]: Stopped target ignition-diskful.target. Sep 13 01:33:08.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.639858 ignition[1067]: INFO : Ignition 2.14.0 Sep 13 01:33:08.639858 ignition[1067]: INFO : Stage: umount Sep 13 01:33:08.639858 ignition[1067]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:33:08.639858 ignition[1067]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:33:08.639858 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:33:08.639858 ignition[1067]: INFO : umount: umount passed Sep 13 01:33:08.639858 ignition[1067]: INFO : Ignition finished successfully Sep 13 01:33:08.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:08.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.464507 systemd[1]: Stopped target initrd-root-device.target. Sep 13 01:33:08.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.474787 systemd[1]: Stopped target remote-fs.target. Sep 13 01:33:08.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.483003 systemd[1]: Stopped target remote-fs-pre.target. 
Sep 13 01:33:08.491850 systemd[1]: Stopped target sysinit.target. Sep 13 01:33:08.500412 systemd[1]: Stopped target local-fs.target. Sep 13 01:33:08.512322 systemd[1]: Stopped target local-fs-pre.target. Sep 13 01:33:08.521947 systemd[1]: Stopped target swap.target. Sep 13 01:33:08.530945 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 01:33:08.531016 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 01:33:08.540618 systemd[1]: Stopped target cryptsetup.target. Sep 13 01:33:08.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.550060 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 01:33:08.550112 systemd[1]: Stopped dracut-initqueue.service. Sep 13 01:33:08.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.559821 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 01:33:08.559863 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 01:33:08.569850 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 01:33:08.569889 systemd[1]: Stopped ignition-files.service. Sep 13 01:33:08.580620 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 01:33:08.580660 systemd[1]: Stopped flatcar-metadata-hostname.service. Sep 13 01:33:08.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.591412 systemd[1]: Stopping ignition-mount.service... 
Sep 13 01:33:08.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.599554 systemd[1]: Stopping iscsid.service... Sep 13 01:33:08.884000 audit: BPF prog-id=6 op=UNLOAD Sep 13 01:33:08.608523 systemd[1]: Stopping sysroot-boot.service... Sep 13 01:33:08.623740 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 01:33:08.623829 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 01:33:08.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.635500 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 01:33:08.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.635568 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 01:33:08.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.645981 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 01:33:08.646101 systemd[1]: Stopped iscsid.service. Sep 13 01:33:08.653744 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 01:33:08.653832 systemd[1]: Finished initrd-cleanup.service. Sep 13 01:33:08.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.662096 systemd[1]: ignition-mount.service: Deactivated successfully. 
Sep 13 01:33:08.662185 systemd[1]: Stopped ignition-mount.service. Sep 13 01:33:08.674657 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 01:33:09.004242 kernel: hv_netvsc 0022487c-d206-0022-487c-d2060022487c eth0: Data path switched from VF: enP48491s1 Sep 13 01:33:08.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.674714 systemd[1]: Stopped ignition-disks.service. Sep 13 01:33:08.700146 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 01:33:09.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.700199 systemd[1]: Stopped ignition-kargs.service. Sep 13 01:33:09.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.709792 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 01:33:08.709837 systemd[1]: Stopped ignition-fetch.service. Sep 13 01:33:08.720513 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 01:33:08.720556 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 01:33:09.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.730894 systemd[1]: Stopped target paths.target. Sep 13 01:33:08.739419 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 13 01:33:09.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.749489 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 01:33:09.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.755555 systemd[1]: Stopped target slices.target. Sep 13 01:33:09.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.768777 systemd[1]: Stopped target sockets.target. Sep 13 01:33:09.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:09.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.776798 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 01:33:08.776855 systemd[1]: Closed iscsid.socket. Sep 13 01:33:08.785687 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 01:33:08.785733 systemd[1]: Stopped ignition-setup.service. Sep 13 01:33:08.795440 systemd[1]: Stopping iscsiuio.service... Sep 13 01:33:08.805516 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 01:33:08.805975 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 01:33:08.806082 systemd[1]: Stopped iscsiuio.service. Sep 13 01:33:08.814618 systemd[1]: Stopped target network.target. 
Sep 13 01:33:08.822889 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 01:33:08.822928 systemd[1]: Closed iscsiuio.socket. Sep 13 01:33:09.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.832852 systemd[1]: Stopping systemd-networkd.service... Sep 13 01:33:08.842045 systemd[1]: Stopping systemd-resolved.service... Sep 13 01:33:09.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.852274 systemd-networkd[868]: eth0: DHCPv6 lease lost Sep 13 01:33:09.177000 audit: BPF prog-id=9 op=UNLOAD Sep 13 01:33:08.857411 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 01:33:08.857727 systemd[1]: Stopped systemd-networkd.service. Sep 13 01:33:08.869978 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 01:33:08.870085 systemd[1]: Stopped systemd-resolved.service. Sep 13 01:33:08.880315 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 01:33:08.880360 systemd[1]: Closed systemd-networkd.socket. Sep 13 01:33:08.892586 systemd[1]: Stopping network-cleanup.service... Sep 13 01:33:08.901611 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 01:33:08.901697 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 01:33:08.914363 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 01:33:08.914428 systemd[1]: Stopped systemd-sysctl.service. Sep 13 01:33:08.928846 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 01:33:08.928901 systemd[1]: Stopped systemd-modules-load.service. Sep 13 01:33:08.934022 systemd[1]: Stopping systemd-udevd.service... 
Sep 13 01:33:08.944383 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 01:33:08.952567 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 01:33:09.275000 audit: BPF prog-id=8 op=UNLOAD Sep 13 01:33:09.275000 audit: BPF prog-id=7 op=UNLOAD Sep 13 01:33:09.277000 audit: BPF prog-id=5 op=UNLOAD Sep 13 01:33:09.277000 audit: BPF prog-id=4 op=UNLOAD Sep 13 01:33:09.277000 audit: BPF prog-id=3 op=UNLOAD Sep 13 01:33:08.952738 systemd[1]: Stopped systemd-udevd.service. Sep 13 01:33:08.962155 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 01:33:08.962196 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 01:33:08.970817 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 01:33:08.970901 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 01:33:09.313569 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Sep 13 01:33:08.980778 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 01:33:08.980837 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 01:33:08.999037 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 01:33:08.999091 systemd[1]: Stopped dracut-cmdline.service. Sep 13 01:33:09.012877 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 01:33:09.012931 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 01:33:09.026983 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 01:33:09.041800 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 01:33:09.041892 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 13 01:33:09.058057 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 01:33:09.058344 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 01:33:09.066700 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 01:33:09.066750 systemd[1]: Stopped systemd-vconsole-setup.service. 
Sep 13 01:33:09.078670 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 13 01:33:09.079155 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 01:33:09.079248 systemd[1]: Stopped network-cleanup.service. Sep 13 01:33:09.088687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 01:33:09.088772 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 01:33:09.149000 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 01:33:09.149119 systemd[1]: Stopped sysroot-boot.service. Sep 13 01:33:09.156215 systemd[1]: Reached target initrd-switch-root.target. Sep 13 01:33:09.164797 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 01:33:09.164856 systemd[1]: Stopped initrd-setup-root.service. Sep 13 01:33:09.174600 systemd[1]: Starting initrd-switch-root.service... Sep 13 01:33:09.273887 systemd[1]: Switching root. Sep 13 01:33:09.314207 systemd-journald[276]: Journal stopped Sep 13 01:33:27.228257 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 01:33:27.228278 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 13 01:33:27.228289 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 01:33:27.228298 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 01:33:27.228306 kernel: SELinux: policy capability open_perms=1 Sep 13 01:33:27.228314 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 01:33:27.228323 kernel: SELinux: policy capability always_check_network=0 Sep 13 01:33:27.228331 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 01:33:27.228340 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 01:33:27.228348 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 01:33:27.228356 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 01:33:27.228366 kernel: kauditd_printk_skb: 44 callbacks suppressed Sep 13 01:33:27.228374 kernel: audit: type=1403 audit(1757727193.379:88): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 01:33:27.228384 systemd[1]: Successfully loaded SELinux policy in 331.963ms. Sep 13 01:33:27.228395 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.600ms. Sep 13 01:33:27.228407 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 01:33:27.228417 systemd[1]: Detected virtualization microsoft. Sep 13 01:33:27.228427 systemd[1]: Detected architecture arm64. Sep 13 01:33:27.228436 systemd[1]: Detected first boot. Sep 13 01:33:27.228445 systemd[1]: Hostname set to . Sep 13 01:33:27.228454 systemd[1]: Initializing machine ID from random generator. Sep 13 01:33:27.228474 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Sep 13 01:33:27.228486 kernel: audit: type=1400 audit(1757727196.058:89): avc: denied { associate } for pid=1117 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 01:33:27.228497 kernel: audit: type=1300 audit(1757727196.058:89): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001472a4 a1=40000c85b8 a2=40000ce7c0 a3=32 items=0 ppid=1100 pid=1117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:33:27.228506 kernel: audit: type=1327 audit(1757727196.058:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 01:33:27.228516 kernel: audit: type=1400 audit(1757727196.072:90): avc: denied { associate } for pid=1117 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 13 01:33:27.228525 kernel: audit: type=1300 audit(1757727196.072:90): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000147379 a2=1ed a3=0 items=2 ppid=1100 pid=1117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:33:27.228535 kernel: audit: type=1307 audit(1757727196.072:90): cwd="/" Sep 13 01:33:27.228545 kernel: audit: type=1302 audit(1757727196.072:90): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:27.228554 kernel: audit: type=1302 audit(1757727196.072:90): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:27.228564 kernel: audit: type=1327 audit(1757727196.072:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 01:33:27.228573 systemd[1]: Populated /etc with preset unit settings. Sep 13 01:33:27.228582 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:33:27.228592 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:33:27.228603 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:33:27.228613 systemd[1]: Queued start job for default target multi-user.target. Sep 13 01:33:27.228621 systemd[1]: Unnecessary job was removed for dev-sda6.device. Sep 13 01:33:27.228631 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 01:33:27.228641 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 01:33:27.228650 systemd[1]: Created slice system-getty.slice. Sep 13 01:33:27.228662 systemd[1]: Created slice system-modprobe.slice. Sep 13 01:33:27.228672 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 01:33:27.228682 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
Sep 13 01:33:27.228691 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 01:33:27.228700 systemd[1]: Created slice user.slice. Sep 13 01:33:27.228709 systemd[1]: Started systemd-ask-password-console.path. Sep 13 01:33:27.228718 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 01:33:27.228728 systemd[1]: Set up automount boot.automount. Sep 13 01:33:27.228737 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 01:33:27.228746 systemd[1]: Reached target integritysetup.target. Sep 13 01:33:27.228757 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 01:33:27.228766 systemd[1]: Reached target remote-fs.target. Sep 13 01:33:27.228775 systemd[1]: Reached target slices.target. Sep 13 01:33:27.228784 systemd[1]: Reached target swap.target. Sep 13 01:33:27.228793 systemd[1]: Reached target torcx.target. Sep 13 01:33:27.228803 systemd[1]: Reached target veritysetup.target. Sep 13 01:33:27.228812 systemd[1]: Listening on systemd-coredump.socket. Sep 13 01:33:27.228821 systemd[1]: Listening on systemd-initctl.socket. Sep 13 01:33:27.228832 kernel: audit: type=1400 audit(1757727206.724:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 01:33:27.228842 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 01:33:27.228851 kernel: audit: type=1335 audit(1757727206.729:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 13 01:33:27.228860 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 01:33:27.228869 systemd[1]: Listening on systemd-journald.socket. Sep 13 01:33:27.228878 systemd[1]: Listening on systemd-networkd.socket. Sep 13 01:33:27.228888 systemd[1]: Listening on systemd-udevd-control.socket. 
Sep 13 01:33:27.228900 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 01:33:27.228909 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 01:33:27.228918 systemd[1]: Mounting dev-hugepages.mount... Sep 13 01:33:27.228927 systemd[1]: Mounting dev-mqueue.mount... Sep 13 01:33:27.228937 systemd[1]: Mounting media.mount... Sep 13 01:33:27.228946 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 01:33:27.228957 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 01:33:27.228966 systemd[1]: Mounting tmp.mount... Sep 13 01:33:27.228976 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 01:33:27.228986 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 01:33:27.228995 systemd[1]: Starting kmod-static-nodes.service... Sep 13 01:33:27.229004 systemd[1]: Starting modprobe@configfs.service... Sep 13 01:33:27.229014 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 01:33:27.229024 systemd[1]: Starting modprobe@drm.service... Sep 13 01:33:27.229033 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:33:27.229043 systemd[1]: Starting modprobe@fuse.service... Sep 13 01:33:27.229054 systemd[1]: Starting modprobe@loop.service... Sep 13 01:33:27.229063 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 01:33:27.229073 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 13 01:33:27.229082 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 13 01:33:27.229092 systemd[1]: Starting systemd-journald.service... Sep 13 01:33:27.229101 kernel: loop: module loaded Sep 13 01:33:27.229110 systemd[1]: Starting systemd-modules-load.service... Sep 13 01:33:27.229120 systemd[1]: Starting systemd-network-generator.service... 
Sep 13 01:33:27.229130 kernel: fuse: init (API version 7.34) Sep 13 01:33:27.229138 systemd[1]: Starting systemd-remount-fs.service... Sep 13 01:33:27.229148 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 01:33:27.229158 systemd[1]: Mounted dev-hugepages.mount. Sep 13 01:33:27.229167 systemd[1]: Mounted dev-mqueue.mount. Sep 13 01:33:27.229176 systemd[1]: Mounted media.mount. Sep 13 01:33:27.229185 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 01:33:27.229195 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 01:33:27.229204 systemd[1]: Mounted tmp.mount. Sep 13 01:33:27.229215 kernel: audit: type=1305 audit(1757727207.210:93): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 01:33:27.229224 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 01:33:27.229237 systemd-journald[1230]: Journal started Sep 13 01:33:27.229277 systemd-journald[1230]: Runtime Journal (/run/log/journal/3901b0a15b844450ba7cf9ee7a987990) is 8.0M, max 78.5M, 70.5M free. Sep 13 01:33:26.729000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 13 01:33:27.210000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 01:33:27.266939 kernel: audit: type=1300 audit(1757727207.210:93): arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffd9de7130 a2=4000 a3=1 items=0 ppid=1 pid=1230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:33:27.266997 systemd[1]: Started systemd-journald.service. 
Sep 13 01:33:27.210000 audit[1230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffd9de7130 a2=4000 a3=1 items=0 ppid=1 pid=1230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:33:27.210000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 01:33:27.272491 kernel: audit: type=1327 audit(1757727207.210:93): proctitle="/usr/lib/systemd/systemd-journald" Sep 13 01:33:27.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.304994 kernel: audit: type=1130 audit(1757727207.263:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.305434 systemd[1]: Finished kmod-static-nodes.service. Sep 13 01:33:27.325274 kernel: audit: type=1130 audit(1757727207.304:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.330792 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Sep 13 01:33:27.331048 systemd[1]: Finished modprobe@configfs.service. Sep 13 01:33:27.355480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:33:27.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.362683 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 01:33:27.393292 kernel: audit: type=1130 audit(1757727207.329:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.393366 kernel: audit: type=1130 audit(1757727207.354:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.393386 kernel: audit: type=1131 audit(1757727207.354:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.404323 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 01:33:27.404517 systemd[1]: Finished modprobe@drm.service. Sep 13 01:33:27.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:27.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.409649 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:33:27.409810 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 01:33:27.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.416445 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 01:33:27.416749 systemd[1]: Finished modprobe@fuse.service. Sep 13 01:33:27.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:27.422294 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 01:33:27.422480 systemd[1]: Finished modprobe@loop.service. Sep 13 01:33:27.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.427795 systemd[1]: Finished systemd-modules-load.service. Sep 13 01:33:27.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.433182 systemd[1]: Finished systemd-network-generator.service. Sep 13 01:33:27.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.439383 systemd[1]: Finished systemd-remount-fs.service. Sep 13 01:33:27.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.445062 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 01:33:27.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.450692 systemd[1]: Reached target network-pre.target. 
Sep 13 01:33:27.457046 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 01:33:27.463323 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 01:33:27.467662 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 01:33:27.500714 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 01:33:27.506577 systemd[1]: Starting systemd-journal-flush.service... Sep 13 01:33:27.511222 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 01:33:27.512495 systemd[1]: Starting systemd-random-seed.service... Sep 13 01:33:27.517521 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 01:33:27.518725 systemd[1]: Starting systemd-sysctl.service... Sep 13 01:33:27.524416 systemd[1]: Starting systemd-sysusers.service... Sep 13 01:33:27.530181 systemd[1]: Starting systemd-udev-settle.service... Sep 13 01:33:27.537780 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 01:33:27.543115 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 01:33:27.552618 udevadm[1267]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 01:33:27.594010 systemd[1]: Finished systemd-random-seed.service. Sep 13 01:33:27.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.599697 systemd[1]: Reached target first-boot-complete.target. Sep 13 01:33:27.603262 systemd-journald[1230]: Time spent on flushing to /var/log/journal/3901b0a15b844450ba7cf9ee7a987990 is 13.387ms for 1046 entries. Sep 13 01:33:27.603262 systemd-journald[1230]: System Journal (/var/log/journal/3901b0a15b844450ba7cf9ee7a987990) is 8.0M, max 2.6G, 2.6G free. 
Sep 13 01:33:27.681815 systemd-journald[1230]: Received client request to flush runtime journal. Sep 13 01:33:27.682991 systemd[1]: Finished systemd-journal-flush.service. Sep 13 01:33:27.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:27.800240 systemd[1]: Finished systemd-sysctl.service. Sep 13 01:33:27.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:28.445592 systemd[1]: Finished systemd-sysusers.service. Sep 13 01:33:28.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:28.452401 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 01:33:29.255860 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 01:33:29.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.382258 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 01:33:29.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:29.389530 systemd[1]: Starting systemd-udevd.service... Sep 13 01:33:29.408943 systemd-udevd[1278]: Using default interface naming scheme 'v252'. Sep 13 01:33:30.600487 systemd[1]: Started systemd-udevd.service. 
Sep 13 01:33:30.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:30.622338 systemd[1]: Starting systemd-networkd.service... Sep 13 01:33:30.649587 systemd[1]: Found device dev-ttyAMA0.device. Sep 13 01:33:30.730673 systemd[1]: Starting systemd-userdbd.service... Sep 13 01:33:30.735000 audit[1284]: AVC avc: denied { confidentiality } for pid=1284 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 01:33:30.753627 kernel: hv_vmbus: registering driver hv_balloon Sep 13 01:33:30.753715 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 13 01:33:30.753731 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 13 01:33:30.735000 audit[1284]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab0097b040 a1=aa2c a2=ffff954524b0 a3=aaab008da010 items=12 ppid=1278 pid=1284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:33:30.735000 audit: CWD cwd="/" Sep 13 01:33:30.735000 audit: PATH item=0 name=(null) inode=7289 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:30.735000 audit: PATH item=1 name=(null) inode=10651 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:30.735000 audit: PATH item=2 name=(null) inode=10651 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:30.735000 
audit: PATH item=3 name=(null) inode=10652 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:30.735000 audit: PATH item=4 name=(null) inode=10651 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:30.735000 audit: PATH item=5 name=(null) inode=10653 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:30.735000 audit: PATH item=6 name=(null) inode=10651 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:30.735000 audit: PATH item=7 name=(null) inode=10654 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:30.735000 audit: PATH item=8 name=(null) inode=10651 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:30.735000 audit: PATH item=9 name=(null) inode=10655 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:30.735000 audit: PATH item=10 name=(null) inode=10651 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:30.735000 audit: PATH item=11 name=(null) inode=10656 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:30.735000 audit: PROCTITLE proctitle="(udev-worker)" Sep 
13 01:33:30.777347 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 01:33:30.777433 kernel: hv_vmbus: registering driver hyperv_fb Sep 13 01:33:30.793846 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 13 01:33:30.793932 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 13 01:33:30.801660 kernel: Console: switching to colour dummy device 80x25 Sep 13 01:33:30.809644 kernel: Console: switching to colour frame buffer device 128x48 Sep 13 01:33:30.816213 systemd[1]: Started systemd-userdbd.service. Sep 13 01:33:30.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:30.850600 kernel: hv_utils: Registering HyperV Utility Driver Sep 13 01:33:30.850758 kernel: hv_vmbus: registering driver hv_utils Sep 13 01:33:30.860078 kernel: hv_utils: Heartbeat IC version 3.0 Sep 13 01:33:30.860183 kernel: hv_utils: Shutdown IC version 3.2 Sep 13 01:33:30.860205 kernel: hv_utils: TimeSync IC version 4.0 Sep 13 01:33:30.595178 systemd[1]: Finished systemd-udev-settle.service. Sep 13 01:33:30.665796 systemd-journald[1230]: Time jumped backwards, rotating. Sep 13 01:33:30.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:30.613027 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 01:33:30.619817 systemd[1]: Starting lvm2-activation-early.service... Sep 13 01:33:30.976763 lvm[1355]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 01:33:31.039349 systemd[1]: Finished lvm2-activation-early.service. 
Sep 13 01:33:31.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:31.045158 systemd[1]: Reached target cryptsetup.target. Sep 13 01:33:31.051517 systemd[1]: Starting lvm2-activation.service... Sep 13 01:33:31.055703 lvm[1358]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 01:33:31.077432 systemd[1]: Finished lvm2-activation.service. Sep 13 01:33:31.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:31.083502 systemd[1]: Reached target local-fs-pre.target. Sep 13 01:33:31.089524 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 01:33:31.089553 systemd[1]: Reached target local-fs.target. Sep 13 01:33:31.094248 systemd[1]: Reached target machines.target. Sep 13 01:33:31.101058 systemd[1]: Starting ldconfig.service... Sep 13 01:33:31.124738 systemd-networkd[1299]: lo: Link UP Sep 13 01:33:31.124745 systemd-networkd[1299]: lo: Gained carrier Sep 13 01:33:31.125167 systemd-networkd[1299]: Enumeration completed Sep 13 01:33:31.134629 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 01:33:31.134707 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:33:31.135969 systemd[1]: Starting systemd-boot-update.service... Sep 13 01:33:31.141751 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 01:33:31.148658 systemd[1]: Starting systemd-machine-id-commit.service... 
Sep 13 01:33:31.155434 systemd[1]: Starting systemd-sysext.service... Sep 13 01:33:31.158508 systemd-networkd[1299]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 01:33:31.159767 systemd[1]: Started systemd-networkd.service. Sep 13 01:33:31.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:31.166340 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 01:33:31.226843 kernel: mlx5_core bd6b:00:02.0 enP48491s1: Link up Sep 13 01:33:31.451926 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 13 01:33:31.451987 kernel: hv_netvsc 0022487c-d206-0022-487c-d2060022487c eth0: Data path switched to VF: enP48491s1 Sep 13 01:33:31.226024 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1361 (bootctl) Sep 13 01:33:31.227404 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 01:33:31.253064 systemd-networkd[1299]: enP48491s1: Link UP Sep 13 01:33:31.253158 systemd-networkd[1299]: eth0: Link UP Sep 13 01:33:31.253161 systemd-networkd[1299]: eth0: Gained carrier Sep 13 01:33:31.257670 systemd-networkd[1299]: enP48491s1: Gained carrier Sep 13 01:33:31.265508 systemd-networkd[1299]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 13 01:33:31.478245 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 01:33:31.510873 kernel: kauditd_printk_skb: 42 callbacks suppressed Sep 13 01:33:31.510944 kernel: audit: type=1130 audit(1757727211.483:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:31.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:31.491230 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 01:33:31.512753 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 01:33:31.513037 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 01:33:31.575418 kernel: loop0: detected capacity change from 0 to 203944 Sep 13 01:33:31.594318 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 01:33:31.595022 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 01:33:31.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:31.620496 kernel: audit: type=1130 audit(1757727211.599:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:31.671437 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 01:33:31.697413 kernel: loop1: detected capacity change from 0 to 203944 Sep 13 01:33:31.712204 (sd-sysext)[1378]: Using extensions 'kubernetes'. Sep 13 01:33:31.712575 (sd-sysext)[1378]: Merged extensions into '/usr'. Sep 13 01:33:31.729790 systemd[1]: Mounting usr-share-oem.mount... Sep 13 01:33:31.733748 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 01:33:31.735071 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 01:33:31.740763 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:33:31.751349 systemd[1]: Starting modprobe@loop.service... 
Sep 13 01:33:31.758241 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 01:33:31.758403 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:33:31.763080 systemd-fsck[1374]: fsck.fat 4.2 (2021-01-31) Sep 13 01:33:31.763080 systemd-fsck[1374]: /dev/sda1: 236 files, 117310/258078 clusters Sep 13 01:33:31.763900 systemd[1]: Mounted usr-share-oem.mount. Sep 13 01:33:31.769916 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:33:31.770095 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 01:33:31.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:31.776924 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 01:33:31.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:31.819819 kernel: audit: type=1130 audit(1757727211.775:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:31.819879 kernel: audit: type=1131 audit(1757727211.775:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:31.819995 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:33:31.820256 systemd[1]: Finished modprobe@efi_pstore.service. 
Sep 13 01:33:31.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:31.842639 kernel: audit: type=1130 audit(1757727211.794:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:31.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:31.849033 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:33:31.849355 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:33:31.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:31.892477 kernel: audit: type=1130 audit(1757727211.847:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:31.892560 kernel: audit: type=1131 audit(1757727211.847:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:31.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:31.895698 systemd[1]: Mounting boot.mount...
Sep 13 01:33:31.913050 kernel: audit: type=1130 audit(1757727211.890:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:31.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:31.937046 kernel: audit: type=1131 audit(1757727211.890:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:31.940876 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:33:31.941056 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:33:31.941685 systemd[1]: Finished systemd-sysext.service.
Sep 13 01:33:31.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:31.949460 systemd[1]: Starting ensure-sysext.service...
Sep 13 01:33:31.970410 kernel: audit: type=1130 audit(1757727211.945:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:31.976208 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 13 01:33:31.990284 systemd[1]: Mounted boot.mount.
Sep 13 01:33:31.995058 systemd[1]: Reloading.
Sep 13 01:33:32.004133 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 13 01:33:32.038752 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 01:33:32.057597 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 01:33:32.059266 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2025-09-13T01:33:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 01:33:32.062481 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2025-09-13T01:33:32Z" level=info msg="torcx already run"
Sep 13 01:33:32.150830 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 01:33:32.150851 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 01:33:32.168988 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 01:33:32.240123 systemd[1]: Finished systemd-boot-update.service.
Sep 13 01:33:32.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.253989 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 01:33:32.255631 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:33:32.265334 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:33:32.272570 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:33:32.277123 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:33:32.277262 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:33:32.278155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:33:32.278346 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:33:32.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.283866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:33:32.284034 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:33:32.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.290139 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:33:32.290362 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:33:32.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.297195 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 01:33:32.298710 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:33:32.304876 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:33:32.310839 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:33:32.315366 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:33:32.315529 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:33:32.316353 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:33:32.316557 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:33:32.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.322796 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:33:32.322966 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:33:32.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.329021 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:33:32.329245 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:33:32.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.336871 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 01:33:32.338310 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:33:32.344763 systemd[1]: Starting modprobe@drm.service...
Sep 13 01:33:32.350583 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:33:32.357615 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:33:32.362023 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:33:32.362163 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:33:32.363190 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:33:32.363373 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:33:32.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.369574 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 01:33:32.369735 systemd[1]: Finished modprobe@drm.service.
Sep 13 01:33:32.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.375333 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:33:32.375516 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:33:32.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.381775 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:33:32.381998 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:33:32.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:32.387682 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:33:32.387757 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:33:32.388902 systemd[1]: Finished ensure-sysext.service.
Sep 13 01:33:32.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:33.282557 systemd-networkd[1299]: eth0: Gained IPv6LL
Sep 13 01:33:33.285428 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 01:33:33.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:35.302315 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 13 01:33:35.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:35.310835 systemd[1]: Starting audit-rules.service...
Sep 13 01:33:35.316504 systemd[1]: Starting clean-ca-certificates.service...
Sep 13 01:33:35.322933 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 13 01:33:35.330959 systemd[1]: Starting systemd-resolved.service...
Sep 13 01:33:35.338095 systemd[1]: Starting systemd-timesyncd.service...
Sep 13 01:33:35.345131 systemd[1]: Starting systemd-update-utmp.service...
Sep 13 01:33:35.351257 systemd[1]: Finished clean-ca-certificates.service.
Sep 13 01:33:35.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:35.357299 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 01:33:35.399000 audit[1520]: SYSTEM_BOOT pid=1520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:35.404109 systemd[1]: Finished systemd-update-utmp.service.
Sep 13 01:33:35.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:35.494548 systemd[1]: Started systemd-timesyncd.service.
Sep 13 01:33:35.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:35.500366 systemd[1]: Reached target time-set.target.
Sep 13 01:33:35.562415 systemd-resolved[1517]: Positive Trust Anchors:
Sep 13 01:33:35.562788 systemd-resolved[1517]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 01:33:35.562865 systemd-resolved[1517]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 01:33:35.647045 systemd-resolved[1517]: Using system hostname 'ci-3510.3.8-n-8d5f1b2fe1'.
Sep 13 01:33:35.648676 systemd[1]: Started systemd-resolved.service.
Sep 13 01:33:35.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:35.654107 systemd[1]: Reached target network.target.
Sep 13 01:33:35.658928 systemd[1]: Reached target network-online.target.
Sep 13 01:33:35.665420 systemd[1]: Reached target nss-lookup.target.
Sep 13 01:33:35.730735 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 13 01:33:35.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:35.796014 augenrules[1537]: No rules
Sep 13 01:33:35.794000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 01:33:35.794000 audit[1537]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe6d246a0 a2=420 a3=0 items=0 ppid=1513 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:33:35.794000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 13 01:33:35.797113 systemd[1]: Finished audit-rules.service.
Sep 13 01:33:35.839674 systemd-timesyncd[1518]: Contacted time server 104.131.155.175:123 (0.flatcar.pool.ntp.org).
Sep 13 01:33:35.839746 systemd-timesyncd[1518]: Initial clock synchronization to Sat 2025-09-13 01:33:35.839131 UTC.
Sep 13 01:33:43.581198 ldconfig[1360]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 01:33:43.592609 systemd[1]: Finished ldconfig.service.
Sep 13 01:33:43.599672 systemd[1]: Starting systemd-update-done.service...
Sep 13 01:33:43.653260 systemd[1]: Finished systemd-update-done.service.
Sep 13 01:33:43.658414 systemd[1]: Reached target sysinit.target.
Sep 13 01:33:43.663467 systemd[1]: Started motdgen.path.
Sep 13 01:33:43.667489 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 01:33:43.674137 systemd[1]: Started logrotate.timer.
Sep 13 01:33:43.678463 systemd[1]: Started mdadm.timer.
Sep 13 01:33:43.682471 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 01:33:43.687362 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 01:33:43.687410 systemd[1]: Reached target paths.target.
Sep 13 01:33:43.691888 systemd[1]: Reached target timers.target.
Sep 13 01:33:43.696885 systemd[1]: Listening on dbus.socket.
Sep 13 01:33:43.702323 systemd[1]: Starting docker.socket...
Sep 13 01:33:43.736795 systemd[1]: Listening on sshd.socket.
Sep 13 01:33:43.741485 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:33:43.741934 systemd[1]: Listening on docker.socket.
Sep 13 01:33:43.746517 systemd[1]: Reached target sockets.target.
Sep 13 01:33:43.751073 systemd[1]: Reached target basic.target.
Sep 13 01:33:43.755685 systemd[1]: System is tainted: cgroupsv1
Sep 13 01:33:43.755740 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 01:33:43.755765 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 01:33:43.756960 systemd[1]: Starting containerd.service...
Sep 13 01:33:43.762206 systemd[1]: Starting dbus.service...
Sep 13 01:33:43.766914 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 01:33:43.773039 systemd[1]: Starting extend-filesystems.service...
Sep 13 01:33:43.777575 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 01:33:43.793299 systemd[1]: Starting kubelet.service...
Sep 13 01:33:43.798354 systemd[1]: Starting motdgen.service...
Sep 13 01:33:43.803575 systemd[1]: Started nvidia.service.
Sep 13 01:33:43.808934 systemd[1]: Starting prepare-helm.service...
Sep 13 01:33:43.814229 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 01:33:43.820070 systemd[1]: Starting sshd-keygen.service...
Sep 13 01:33:43.826433 systemd[1]: Starting systemd-logind.service...
Sep 13 01:33:43.830920 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:33:43.830995 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 01:33:43.832393 systemd[1]: Starting update-engine.service...
Sep 13 01:33:43.838174 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 01:33:43.851020 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 01:33:43.851281 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 13 01:33:43.863161 jq[1551]: false
Sep 13 01:33:43.864829 jq[1566]: true
Sep 13 01:33:43.880499 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 01:33:43.880746 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 13 01:33:43.898681 extend-filesystems[1552]: Found loop1
Sep 13 01:33:43.903191 extend-filesystems[1552]: Found sda
Sep 13 01:33:43.903191 extend-filesystems[1552]: Found sda1
Sep 13 01:33:43.903191 extend-filesystems[1552]: Found sda2
Sep 13 01:33:43.903191 extend-filesystems[1552]: Found sda3
Sep 13 01:33:43.903191 extend-filesystems[1552]: Found usr
Sep 13 01:33:43.903191 extend-filesystems[1552]: Found sda4
Sep 13 01:33:43.903191 extend-filesystems[1552]: Found sda6
Sep 13 01:33:43.903191 extend-filesystems[1552]: Found sda7
Sep 13 01:33:43.903191 extend-filesystems[1552]: Found sda9
Sep 13 01:33:43.903191 extend-filesystems[1552]: Checking size of /dev/sda9
Sep 13 01:33:43.957092 jq[1578]: true
Sep 13 01:33:43.957011 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 01:33:43.957260 systemd[1]: Finished motdgen.service.
Sep 13 01:33:44.004272 systemd-logind[1563]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Sep 13 01:33:44.006465 systemd-logind[1563]: New seat seat0.
Sep 13 01:33:44.035634 extend-filesystems[1552]: Old size kept for /dev/sda9
Sep 13 01:33:44.035634 extend-filesystems[1552]: Found sr0
Sep 13 01:33:44.083860 env[1591]: time="2025-09-13T01:33:44.078748399Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 01:33:44.084112 tar[1574]: linux-arm64/helm
Sep 13 01:33:44.040974 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 01:33:44.041248 systemd[1]: Finished extend-filesystems.service.
Sep 13 01:33:44.135226 bash[1607]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 01:33:44.135539 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 01:33:44.194347 env[1591]: time="2025-09-13T01:33:44.194297908Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 01:33:44.205804 env[1591]: time="2025-09-13T01:33:44.205748285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:33:44.212966 env[1591]: time="2025-09-13T01:33:44.212917850Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 01:33:44.213488 env[1591]: time="2025-09-13T01:33:44.213461481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:33:44.213925 env[1591]: time="2025-09-13T01:33:44.213899274Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 01:33:44.214963 env[1591]: time="2025-09-13T01:33:44.214938738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 01:33:44.215052 env[1591]: time="2025-09-13T01:33:44.215034936Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 01:33:44.215126 env[1591]: time="2025-09-13T01:33:44.215111135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 01:33:44.215324 env[1591]: time="2025-09-13T01:33:44.215296732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:33:44.215784 env[1591]: time="2025-09-13T01:33:44.215762444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:33:44.216144 env[1591]: time="2025-09-13T01:33:44.216120479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 01:33:44.217485 env[1591]: time="2025-09-13T01:33:44.217460737Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 01:33:44.217688 env[1591]: time="2025-09-13T01:33:44.217668854Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 01:33:44.217781 env[1591]: time="2025-09-13T01:33:44.217766452Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 01:33:44.235418 env[1591]: time="2025-09-13T01:33:44.233071007Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 01:33:44.235418 env[1591]: time="2025-09-13T01:33:44.233122646Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 01:33:44.235418 env[1591]: time="2025-09-13T01:33:44.233136166Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 01:33:44.235418 env[1591]: time="2025-09-13T01:33:44.233183245Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 01:33:44.235418 env[1591]: time="2025-09-13T01:33:44.233215525Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 01:33:44.235418 env[1591]: time="2025-09-13T01:33:44.233292364Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 01:33:44.235418 env[1591]: time="2025-09-13T01:33:44.233308163Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 01:33:44.235418 env[1591]: time="2025-09-13T01:33:44.233673917Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 01:33:44.235418 env[1591]: time="2025-09-13T01:33:44.233693877Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 01:33:44.235418 env[1591]: time="2025-09-13T01:33:44.233707517Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 01:33:44.235418 env[1591]: time="2025-09-13T01:33:44.233720197Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 01:33:44.235418 env[1591]: time="2025-09-13T01:33:44.233734796Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 01:33:44.235418 env[1591]: time="2025-09-13T01:33:44.233877354Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 01:33:44.235418 env[1591]: time="2025-09-13T01:33:44.233951393Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 01:33:44.235768 env[1591]: time="2025-09-13T01:33:44.234294108Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 01:33:44.235768 env[1591]: time="2025-09-13T01:33:44.234322707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 01:33:44.235768 env[1591]: time="2025-09-13T01:33:44.234335787Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 01:33:44.235768 env[1591]: time="2025-09-13T01:33:44.234395466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 01:33:44.235768 env[1591]: time="2025-09-13T01:33:44.234416666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 01:33:44.235768 env[1591]: time="2025-09-13T01:33:44.234430945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 01:33:44.235768 env[1591]: time="2025-09-13T01:33:44.234442265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 01:33:44.235768 env[1591]: time="2025-09-13T01:33:44.234454825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 01:33:44.235768 env[1591]: time="2025-09-13T01:33:44.234466385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 01:33:44.235768 env[1591]: time="2025-09-13T01:33:44.234477145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 01:33:44.235768 env[1591]: time="2025-09-13T01:33:44.234488624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 01:33:44.235768 env[1591]: time="2025-09-13T01:33:44.234501584Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 01:33:44.235768 env[1591]: time="2025-09-13T01:33:44.234627542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 01:33:44.235768 env[1591]: time="2025-09-13T01:33:44.234643102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 01:33:44.235768 env[1591]: time="2025-09-13T01:33:44.234655062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 01:33:44.236060 env[1591]: time="2025-09-13T01:33:44.234667102Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 01:33:44.236060 env[1591]: time="2025-09-13T01:33:44.234681861Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 13 01:33:44.236060 env[1591]: time="2025-09-13T01:33:44.234692221Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 01:33:44.236060 env[1591]: time="2025-09-13T01:33:44.234709821Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 13 01:33:44.236060 env[1591]: time="2025-09-13T01:33:44.234745060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 01:33:44.236160 env[1591]: time="2025-09-13T01:33:44.234930617Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 01:33:44.236160 env[1591]: time="2025-09-13T01:33:44.234982137Z" level=info msg="Connect containerd service"
Sep 13 01:33:44.236160 env[1591]: time="2025-09-13T01:33:44.235019056Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 01:33:44.252708 env[1591]: time="2025-09-13T01:33:44.236705589Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 01:33:44.252708 env[1591]: time="2025-09-13T01:33:44.236965185Z" level=info msg="Start subscribing containerd event"
Sep 13 01:33:44.252708 env[1591]: time="2025-09-13T01:33:44.237008104Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 01:33:44.252708 env[1591]: time="2025-09-13T01:33:44.237016704Z" level=info msg="Start recovering state"
Sep 13 01:33:44.252708 env[1591]: time="2025-09-13T01:33:44.237061183Z" level=info msg=serving...
address=/run/containerd/containerd.sock Sep 13 01:33:44.252708 env[1591]: time="2025-09-13T01:33:44.237085143Z" level=info msg="Start event monitor" Sep 13 01:33:44.252708 env[1591]: time="2025-09-13T01:33:44.237119222Z" level=info msg="containerd successfully booted in 0.163821s" Sep 13 01:33:44.252708 env[1591]: time="2025-09-13T01:33:44.247193861Z" level=info msg="Start snapshots syncer" Sep 13 01:33:44.252708 env[1591]: time="2025-09-13T01:33:44.247268700Z" level=info msg="Start cni network conf syncer for default" Sep 13 01:33:44.252708 env[1591]: time="2025-09-13T01:33:44.247296659Z" level=info msg="Start streaming server" Sep 13 01:33:44.237215 systemd[1]: Started containerd.service. Sep 13 01:33:44.313953 systemd[1]: nvidia.service: Deactivated successfully. Sep 13 01:33:44.599207 dbus-daemon[1550]: [system] SELinux support is enabled Sep 13 01:33:44.599435 systemd[1]: Started dbus.service. Sep 13 01:33:44.605486 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 01:33:44.605515 systemd[1]: Reached target system-config.target. Sep 13 01:33:44.610744 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 01:33:44.610769 systemd[1]: Reached target user-config.target. Sep 13 01:33:44.619700 systemd[1]: Started systemd-logind.service. Sep 13 01:33:44.707952 update_engine[1564]: I0913 01:33:44.692816 1564 main.cc:92] Flatcar Update Engine starting Sep 13 01:33:44.769614 tar[1574]: linux-arm64/LICENSE Sep 13 01:33:44.769731 tar[1574]: linux-arm64/README.md Sep 13 01:33:44.774315 systemd[1]: Started update-engine.service. Sep 13 01:33:44.774856 update_engine[1564]: I0913 01:33:44.774726 1564 update_check_scheduler.cc:74] Next update check in 9m21s Sep 13 01:33:44.781296 systemd[1]: Started locksmithd.service. 
Sep 13 01:33:44.792616 systemd[1]: Finished prepare-helm.service.
Sep 13 01:33:45.014910 systemd[1]: Started kubelet.service.
Sep 13 01:33:45.439187 sshd_keygen[1575]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 01:33:45.458740 systemd[1]: Finished sshd-keygen.service.
Sep 13 01:33:45.465534 systemd[1]: Starting issuegen.service...
Sep 13 01:33:45.471626 systemd[1]: Started waagent.service.
Sep 13 01:33:45.476941 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 01:33:45.477259 systemd[1]: Finished issuegen.service.
Sep 13 01:33:45.483971 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 01:33:45.511837 kubelet[1674]: E0913 01:33:45.511794 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:33:45.513625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:33:45.513769 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:33:45.546735 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 01:33:45.555154 systemd[1]: Started getty@tty1.service.
Sep 13 01:33:45.563143 systemd[1]: Started serial-getty@ttyAMA0.service.
Sep 13 01:33:45.572663 systemd[1]: Reached target getty.target.
Sep 13 01:33:45.577088 systemd[1]: Reached target multi-user.target.
Sep 13 01:33:45.583549 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 01:33:45.591984 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 01:33:45.592234 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 01:33:45.599100 systemd[1]: Startup finished in 17.944s (kernel) + 33.250s (userspace) = 51.195s.
Sep 13 01:33:46.256254 locksmithd[1668]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 01:33:46.573945 login[1701]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Sep 13 01:33:46.604047 login[1702]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 13 01:33:46.766341 systemd[1]: Created slice user-500.slice.
Sep 13 01:33:46.767425 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 01:33:46.769663 systemd-logind[1563]: New session 1 of user core.
Sep 13 01:33:46.823097 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 01:33:46.824496 systemd[1]: Starting user@500.service...
Sep 13 01:33:46.923422 (systemd)[1710]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:33:47.398156 systemd[1710]: Queued start job for default target default.target.
Sep 13 01:33:47.398430 systemd[1710]: Reached target paths.target.
Sep 13 01:33:47.398447 systemd[1710]: Reached target sockets.target.
Sep 13 01:33:47.398458 systemd[1710]: Reached target timers.target.
Sep 13 01:33:47.398468 systemd[1710]: Reached target basic.target.
Sep 13 01:33:47.398597 systemd[1]: Started user@500.service.
Sep 13 01:33:47.399438 systemd[1]: Started session-1.scope.
Sep 13 01:33:47.399892 systemd[1710]: Reached target default.target.
Sep 13 01:33:47.399936 systemd[1710]: Startup finished in 470ms.
Sep 13 01:33:47.574264 login[1701]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 13 01:33:47.578717 systemd[1]: Started session-2.scope.
Sep 13 01:33:47.579038 systemd-logind[1563]: New session 2 of user core.
Sep 13 01:33:54.926502 waagent[1694]: 2025-09-13T01:33:54.926377Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Sep 13 01:33:54.980962 waagent[1694]: 2025-09-13T01:33:54.980864Z INFO Daemon Daemon OS: flatcar 3510.3.8
Sep 13 01:33:54.987014 waagent[1694]: 2025-09-13T01:33:54.986922Z INFO Daemon Daemon Python: 3.9.16
Sep 13 01:33:54.992914 waagent[1694]: 2025-09-13T01:33:54.992819Z INFO Daemon Daemon Run daemon
Sep 13 01:33:54.998376 waagent[1694]: 2025-09-13T01:33:54.998293Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8'
Sep 13 01:33:55.032467 waagent[1694]: 2025-09-13T01:33:55.032297Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Sep 13 01:33:55.050045 waagent[1694]: 2025-09-13T01:33:55.049895Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Sep 13 01:33:55.062953 waagent[1694]: 2025-09-13T01:33:55.062853Z INFO Daemon Daemon cloud-init is enabled: False
Sep 13 01:33:55.069820 waagent[1694]: 2025-09-13T01:33:55.069722Z INFO Daemon Daemon Using waagent for provisioning
Sep 13 01:33:55.077272 waagent[1694]: 2025-09-13T01:33:55.077187Z INFO Daemon Daemon Activate resource disk
Sep 13 01:33:55.083265 waagent[1694]: 2025-09-13T01:33:55.083171Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Sep 13 01:33:55.099727 waagent[1694]: 2025-09-13T01:33:55.099634Z INFO Daemon Daemon Found device: None
Sep 13 01:33:55.105376 waagent[1694]: 2025-09-13T01:33:55.105286Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Sep 13 01:33:55.115926 waagent[1694]: 2025-09-13T01:33:55.115834Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Sep 13 01:33:55.129724 waagent[1694]: 2025-09-13T01:33:55.129648Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 13 01:33:55.136578 waagent[1694]: 2025-09-13T01:33:55.136487Z INFO Daemon Daemon Running default provisioning handler
Sep 13 01:33:55.150494 waagent[1694]: 2025-09-13T01:33:55.150305Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Sep 13 01:33:55.168617 waagent[1694]: 2025-09-13T01:33:55.168468Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Sep 13 01:33:55.180629 waagent[1694]: 2025-09-13T01:33:55.180472Z INFO Daemon Daemon cloud-init is enabled: False
Sep 13 01:33:55.186581 waagent[1694]: 2025-09-13T01:33:55.186481Z INFO Daemon Daemon Copying ovf-env.xml
Sep 13 01:33:55.370030 waagent[1694]: 2025-09-13T01:33:55.369882Z INFO Daemon Daemon Successfully mounted dvd
Sep 13 01:33:55.501106 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Sep 13 01:33:55.548207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 01:33:55.548369 systemd[1]: Stopped kubelet.service.
Sep 13 01:33:55.549854 systemd[1]: Starting kubelet.service...
Sep 13 01:33:55.560222 waagent[1694]: 2025-09-13T01:33:55.560063Z INFO Daemon Daemon Detect protocol endpoint
Sep 13 01:33:55.565832 waagent[1694]: 2025-09-13T01:33:55.565741Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 13 01:33:55.572661 waagent[1694]: 2025-09-13T01:33:55.572562Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Sep 13 01:33:55.580408 waagent[1694]: 2025-09-13T01:33:55.580291Z INFO Daemon Daemon Test for route to 168.63.129.16
Sep 13 01:33:55.587031 waagent[1694]: 2025-09-13T01:33:55.586925Z INFO Daemon Daemon Route to 168.63.129.16 exists
Sep 13 01:33:55.593481 waagent[1694]: 2025-09-13T01:33:55.593353Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Sep 13 01:33:55.881602 systemd[1]: Started kubelet.service.
Sep 13 01:33:55.929769 kubelet[1752]: E0913 01:33:55.929718 1752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:33:55.932061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:33:55.932207 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:33:56.093827 waagent[1694]: 2025-09-13T01:33:56.093758Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Sep 13 01:33:56.101646 waagent[1694]: 2025-09-13T01:33:56.101594Z INFO Daemon Daemon Wire protocol version:2012-11-30
Sep 13 01:33:56.107558 waagent[1694]: 2025-09-13T01:33:56.107467Z INFO Daemon Daemon Server preferred version:2015-04-05
Sep 13 01:33:56.909012 waagent[1694]: 2025-09-13T01:33:56.908861Z INFO Daemon Daemon Initializing goal state during protocol detection
Sep 13 01:33:56.925058 waagent[1694]: 2025-09-13T01:33:56.924968Z INFO Daemon Daemon Forcing an update of the goal state..
Sep 13 01:33:56.931435 waagent[1694]: 2025-09-13T01:33:56.931331Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Sep 13 01:33:57.014162 waagent[1694]: 2025-09-13T01:33:57.014008Z INFO Daemon Daemon Found private key matching thumbprint 8F5FB8A06CF54A1DD39E887D68C2D10D70DAFA08
Sep 13 01:33:57.023793 waagent[1694]: 2025-09-13T01:33:57.023696Z INFO Daemon Daemon Fetch goal state completed
Sep 13 01:33:57.075217 waagent[1694]: 2025-09-13T01:33:57.075155Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: be0cec48-46de-4eca-872c-80bb623af56d New eTag: 2726561473058756252]
Sep 13 01:33:57.087596 waagent[1694]: 2025-09-13T01:33:57.087502Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Sep 13 01:33:57.104450 waagent[1694]: 2025-09-13T01:33:57.104340Z INFO Daemon Daemon Starting provisioning
Sep 13 01:33:57.110371 waagent[1694]: 2025-09-13T01:33:57.110271Z INFO Daemon Daemon Handle ovf-env.xml.
Sep 13 01:33:57.115498 waagent[1694]: 2025-09-13T01:33:57.115410Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-8d5f1b2fe1]
Sep 13 01:33:57.181904 waagent[1694]: 2025-09-13T01:33:57.181755Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-8d5f1b2fe1]
Sep 13 01:33:57.191464 waagent[1694]: 2025-09-13T01:33:57.191332Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Sep 13 01:33:57.199092 waagent[1694]: 2025-09-13T01:33:57.199004Z INFO Daemon Daemon Primary interface is [eth0]
Sep 13 01:33:57.216491 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Sep 13 01:33:57.216711 systemd[1]: Stopped systemd-networkd-wait-online.service.
Sep 13 01:33:57.216764 systemd[1]: Stopping systemd-networkd-wait-online.service...
Sep 13 01:33:57.216959 systemd[1]: Stopping systemd-networkd.service...
Sep 13 01:33:57.221454 systemd-networkd[1299]: eth0: DHCPv6 lease lost
Sep 13 01:33:57.223752 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 01:33:57.224003 systemd[1]: Stopped systemd-networkd.service.
Sep 13 01:33:57.225947 systemd[1]: Starting systemd-networkd.service...
Sep 13 01:33:57.261962 systemd-networkd[1769]: enP48491s1: Link UP
Sep 13 01:33:57.261974 systemd-networkd[1769]: enP48491s1: Gained carrier
Sep 13 01:33:57.263037 systemd-networkd[1769]: eth0: Link UP
Sep 13 01:33:57.263048 systemd-networkd[1769]: eth0: Gained carrier
Sep 13 01:33:57.263481 systemd-networkd[1769]: lo: Link UP
Sep 13 01:33:57.263489 systemd-networkd[1769]: lo: Gained carrier
Sep 13 01:33:57.263747 systemd-networkd[1769]: eth0: Gained IPv6LL
Sep 13 01:33:57.263966 systemd-networkd[1769]: Enumeration completed
Sep 13 01:33:57.264620 systemd-networkd[1769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 01:33:57.264828 systemd[1]: Started systemd-networkd.service.
Sep 13 01:33:57.266762 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 01:33:57.269312 waagent[1694]: 2025-09-13T01:33:57.269084Z INFO Daemon Daemon Create user account if not exists
Sep 13 01:33:57.276145 waagent[1694]: 2025-09-13T01:33:57.276047Z INFO Daemon Daemon User core already exists, skip useradd
Sep 13 01:33:57.282484 waagent[1694]: 2025-09-13T01:33:57.282395Z INFO Daemon Daemon Configure sudoer
Sep 13 01:33:57.288479 systemd-networkd[1769]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 13 01:33:57.295494 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 01:33:57.316469 waagent[1694]: 2025-09-13T01:33:57.316353Z INFO Daemon Daemon Configure sshd
Sep 13 01:33:57.321333 waagent[1694]: 2025-09-13T01:33:57.321240Z INFO Daemon Daemon Deploy ssh public key.
Sep 13 01:33:58.575716 waagent[1694]: 2025-09-13T01:33:58.575630Z INFO Daemon Daemon Provisioning complete
Sep 13 01:33:58.596614 waagent[1694]: 2025-09-13T01:33:58.596541Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Sep 13 01:33:58.603346 waagent[1694]: 2025-09-13T01:33:58.603252Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Sep 13 01:33:58.614581 waagent[1694]: 2025-09-13T01:33:58.614477Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Sep 13 01:33:58.927542 waagent[1777]: 2025-09-13T01:33:58.927370Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Sep 13 01:33:58.928266 waagent[1777]: 2025-09-13T01:33:58.928200Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 13 01:33:58.928412 waagent[1777]: 2025-09-13T01:33:58.928354Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 13 01:33:58.941822 waagent[1777]: 2025-09-13T01:33:58.941722Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Sep 13 01:33:58.942025 waagent[1777]: 2025-09-13T01:33:58.941976Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Sep 13 01:33:59.004773 waagent[1777]: 2025-09-13T01:33:59.004624Z INFO ExtHandler ExtHandler Found private key matching thumbprint 8F5FB8A06CF54A1DD39E887D68C2D10D70DAFA08
Sep 13 01:33:59.005112 waagent[1777]: 2025-09-13T01:33:59.005058Z INFO ExtHandler ExtHandler Fetch goal state completed
Sep 13 01:33:59.020270 waagent[1777]: 2025-09-13T01:33:59.020209Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: a880e78d-c447-42a4-aabf-8405d93ae63c New eTag: 2726561473058756252]
Sep 13 01:33:59.020939 waagent[1777]: 2025-09-13T01:33:59.020877Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Sep 13 01:33:59.148213 waagent[1777]: 2025-09-13T01:33:59.148059Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Sep 13 01:33:59.177291 waagent[1777]: 2025-09-13T01:33:59.177187Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1777
Sep 13 01:33:59.181255 waagent[1777]: 2025-09-13T01:33:59.181138Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk']
Sep 13 01:33:59.182555 waagent[1777]: 2025-09-13T01:33:59.182484Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Sep 13 01:33:59.341871 waagent[1777]: 2025-09-13T01:33:59.341801Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Sep 13 01:33:59.342304 waagent[1777]: 2025-09-13T01:33:59.342245Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Sep 13 01:33:59.350329 waagent[1777]: 2025-09-13T01:33:59.350257Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Sep 13 01:33:59.350926 waagent[1777]: 2025-09-13T01:33:59.350860Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Sep 13 01:33:59.352134 waagent[1777]: 2025-09-13T01:33:59.352069Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Sep 13 01:33:59.353600 waagent[1777]: 2025-09-13T01:33:59.353523Z INFO ExtHandler ExtHandler Starting env monitor service.
Sep 13 01:33:59.354238 waagent[1777]: 2025-09-13T01:33:59.354178Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 13 01:33:59.354534 waagent[1777]: 2025-09-13T01:33:59.354479Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 13 01:33:59.355214 waagent[1777]: 2025-09-13T01:33:59.355158Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Sep 13 01:33:59.355615 waagent[1777]: 2025-09-13T01:33:59.355559Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 13 01:33:59.355615 waagent[1777]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 13 01:33:59.355615 waagent[1777]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Sep 13 01:33:59.355615 waagent[1777]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 13 01:33:59.355615 waagent[1777]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 13 01:33:59.355615 waagent[1777]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 13 01:33:59.355615 waagent[1777]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 13 01:33:59.358060 waagent[1777]: 2025-09-13T01:33:59.357886Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Sep 13 01:33:59.358899 waagent[1777]: 2025-09-13T01:33:59.358836Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 13 01:33:59.359167 waagent[1777]: 2025-09-13T01:33:59.359116Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 13 01:33:59.359868 waagent[1777]: 2025-09-13T01:33:59.359807Z INFO EnvHandler ExtHandler Configure routes
Sep 13 01:33:59.360091 waagent[1777]: 2025-09-13T01:33:59.360045Z INFO EnvHandler ExtHandler Gateway:None
Sep 13 01:33:59.360293 waagent[1777]: 2025-09-13T01:33:59.360249Z INFO EnvHandler ExtHandler Routes:None
Sep 13 01:33:59.361290 waagent[1777]: 2025-09-13T01:33:59.361230Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Sep 13 01:33:59.361398 waagent[1777]: 2025-09-13T01:33:59.361317Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Sep 13 01:33:59.362221 waagent[1777]: 2025-09-13T01:33:59.362133Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Sep 13 01:33:59.362310 waagent[1777]: 2025-09-13T01:33:59.362245Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Sep 13 01:33:59.362752 waagent[1777]: 2025-09-13T01:33:59.362681Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 13 01:33:59.374446 waagent[1777]: 2025-09-13T01:33:59.374339Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Sep 13 01:33:59.375542 waagent[1777]: 2025-09-13T01:33:59.375483Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Sep 13 01:33:59.376530 waagent[1777]: 2025-09-13T01:33:59.376465Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Sep 13 01:33:59.419482 waagent[1777]: 2025-09-13T01:33:59.419311Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1769'
Sep 13 01:33:59.442412 waagent[1777]: 2025-09-13T01:33:59.442265Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Sep 13 01:33:59.535685 waagent[1777]: 2025-09-13T01:33:59.535571Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 13 01:33:59.535685 waagent[1777]: Executing ['ip', '-a', '-o', 'link']:
Sep 13 01:33:59.535685 waagent[1777]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 13 01:33:59.535685 waagent[1777]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:d2:06 brd ff:ff:ff:ff:ff:ff
Sep 13 01:33:59.535685 waagent[1777]: 3: enP48491s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:d2:06 brd ff:ff:ff:ff:ff:ff\ altname enP48491p0s2
Sep 13 01:33:59.535685 waagent[1777]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 13 01:33:59.535685 waagent[1777]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 13 01:33:59.535685 waagent[1777]: 2: eth0 inet 10.200.20.20/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 13 01:33:59.535685 waagent[1777]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 13 01:33:59.535685 waagent[1777]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Sep 13 01:33:59.535685 waagent[1777]: 2: eth0 inet6 fe80::222:48ff:fe7c:d206/64 scope link \ valid_lft forever preferred_lft forever
Sep 13 01:33:59.852896 waagent[1777]: 2025-09-13T01:33:59.852825Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting
Sep 13 01:34:00.619497 waagent[1694]: 2025-09-13T01:34:00.619357Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Sep 13 01:34:00.625272 waagent[1694]: 2025-09-13T01:34:00.625212Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent
Sep 13 01:34:01.973395 waagent[1806]: 2025-09-13T01:34:01.973291Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1)
Sep 13 01:34:01.974139 waagent[1806]: 2025-09-13T01:34:01.974069Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8
Sep 13 01:34:01.974281 waagent[1806]: 2025-09-13T01:34:01.974235Z INFO ExtHandler ExtHandler Python: 3.9.16
Sep 13 01:34:01.974457 waagent[1806]: 2025-09-13T01:34:01.974408Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Sep 13 01:34:01.989216 waagent[1806]: 2025-09-13T01:34:01.989080Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1;
Sep 13 01:34:01.989729 waagent[1806]: 2025-09-13T01:34:01.989670Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 13 01:34:01.989910 waagent[1806]: 2025-09-13T01:34:01.989858Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 13 01:34:01.990144 waagent[1806]: 2025-09-13T01:34:01.990095Z INFO ExtHandler ExtHandler Initializing the goal state...
Sep 13 01:34:02.004345 waagent[1806]: 2025-09-13T01:34:02.004252Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 13 01:34:02.017171 waagent[1806]: 2025-09-13T01:34:02.017105Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Sep 13 01:34:02.018318 waagent[1806]: 2025-09-13T01:34:02.018259Z INFO ExtHandler
Sep 13 01:34:02.018520 waagent[1806]: 2025-09-13T01:34:02.018472Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: af49078a-e484-4e01-8fb4-cfc98b921a76 eTag: 2726561473058756252 source: Fabric]
Sep 13 01:34:02.019327 waagent[1806]: 2025-09-13T01:34:02.019271Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Sep 13 01:34:02.020649 waagent[1806]: 2025-09-13T01:34:02.020587Z INFO ExtHandler
Sep 13 01:34:02.020796 waagent[1806]: 2025-09-13T01:34:02.020751Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Sep 13 01:34:02.028891 waagent[1806]: 2025-09-13T01:34:02.028835Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Sep 13 01:34:02.029505 waagent[1806]: 2025-09-13T01:34:02.029454Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Sep 13 01:34:02.050930 waagent[1806]: 2025-09-13T01:34:02.050866Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Sep 13 01:34:02.122748 waagent[1806]: 2025-09-13T01:34:02.122610Z INFO ExtHandler Downloaded certificate {'thumbprint': '8F5FB8A06CF54A1DD39E887D68C2D10D70DAFA08', 'hasPrivateKey': True}
Sep 13 01:34:02.124248 waagent[1806]: 2025-09-13T01:34:02.124183Z INFO ExtHandler Fetch goal state from WireServer completed
Sep 13 01:34:02.125194 waagent[1806]: 2025-09-13T01:34:02.125136Z INFO ExtHandler ExtHandler Goal state initialization completed.
Sep 13 01:34:02.146708 waagent[1806]: 2025-09-13T01:34:02.146570Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
Sep 13 01:34:02.155629 waagent[1806]: 2025-09-13T01:34:02.155501Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
Sep 13 01:34:02.159633 waagent[1806]: 2025-09-13T01:34:02.159508Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT']
Sep 13 01:34:02.159882 waagent[1806]: 2025-09-13T01:34:02.159828Z INFO ExtHandler ExtHandler Checking state of the firewall
Sep 13 01:34:02.385786 waagent[1806]: 2025-09-13T01:34:02.385658Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric:
Sep 13 01:34:02.385786 waagent[1806]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 13 01:34:02.385786 waagent[1806]: pkts bytes target prot opt in out source destination
Sep 13 01:34:02.385786 waagent[1806]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 13 01:34:02.385786 waagent[1806]: pkts bytes target prot opt in out source destination
Sep 13 01:34:02.385786 waagent[1806]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 13 01:34:02.385786 waagent[1806]: pkts bytes target prot opt in out source destination
Sep 13 01:34:02.385786 waagent[1806]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 13 01:34:02.385786 waagent[1806]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 13 01:34:02.385786 waagent[1806]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 13 01:34:02.387347 waagent[1806]: 2025-09-13T01:34:02.387285Z INFO ExtHandler ExtHandler Setting up persistent firewall rules
Sep 13 01:34:02.390597 waagent[1806]: 2025-09-13T01:34:02.390481Z INFO ExtHandler ExtHandler The firewalld service is not present on the system
Sep 13 01:34:02.391008 waagent[1806]: 2025-09-13T01:34:02.390958Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Sep 13 01:34:02.391489 waagent[1806]: 2025-09-13T01:34:02.391436Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Sep 13 01:34:02.399673 waagent[1806]: 2025-09-13T01:34:02.399610Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Sep 13 01:34:02.400450 waagent[1806]: 2025-09-13T01:34:02.400375Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Sep 13 01:34:02.409276 waagent[1806]: 2025-09-13T01:34:02.409193Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1806
Sep 13 01:34:02.412947 waagent[1806]: 2025-09-13T01:34:02.412865Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk']
Sep 13 01:34:02.413978 waagent[1806]: 2025-09-13T01:34:02.413922Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled
Sep 13 01:34:02.415033 waagent[1806]: 2025-09-13T01:34:02.414979Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Sep 13 01:34:02.417971 waagent[1806]: 2025-09-13T01:34:02.417909Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem
Sep 13 01:34:02.418458 waagent[1806]: 2025-09-13T01:34:02.418402Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Sep 13 01:34:02.419945 waagent[1806]: 2025-09-13T01:34:02.419879Z INFO ExtHandler ExtHandler Starting env monitor service.
Sep 13 01:34:02.420330 waagent[1806]: 2025-09-13T01:34:02.420263Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 13 01:34:02.420836 waagent[1806]: 2025-09-13T01:34:02.420772Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 13 01:34:02.421470 waagent[1806]: 2025-09-13T01:34:02.421373Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Sep 13 01:34:02.422246 waagent[1806]: 2025-09-13T01:34:02.422180Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Sep 13 01:34:02.422695 waagent[1806]: 2025-09-13T01:34:02.422634Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 13 01:34:02.422832 waagent[1806]: 2025-09-13T01:34:02.422767Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 13 01:34:02.422832 waagent[1806]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 13 01:34:02.422832 waagent[1806]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Sep 13 01:34:02.422832 waagent[1806]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 13 01:34:02.422832 waagent[1806]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 13 01:34:02.422832 waagent[1806]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 13 01:34:02.422832 waagent[1806]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 13 01:34:02.423568 waagent[1806]: 2025-09-13T01:34:02.423481Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Sep 13 01:34:02.423842 waagent[1806]: 2025-09-13T01:34:02.423791Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 13 01:34:02.423999 waagent[1806]: 2025-09-13T01:34:02.423946Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Sep 13 01:34:02.427988 waagent[1806]: 2025-09-13T01:34:02.427866Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Sep 13 01:34:02.428594 waagent[1806]: 2025-09-13T01:34:02.428536Z INFO EnvHandler ExtHandler Configure routes
Sep 13 01:34:02.428803 waagent[1806]: 2025-09-13T01:34:02.428747Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Sep 13 01:34:02.429783 waagent[1806]: 2025-09-13T01:34:02.429689Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 13 01:34:02.430253 waagent[1806]: 2025-09-13T01:34:02.430191Z INFO EnvHandler ExtHandler Gateway:None
Sep 13 01:34:02.433171 waagent[1806]: 2025-09-13T01:34:02.432985Z INFO EnvHandler ExtHandler Routes:None
Sep 13 01:34:02.447152 waagent[1806]: 2025-09-13T01:34:02.447069Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 13 01:34:02.447152 waagent[1806]: Executing ['ip', '-a', '-o', 'link']:
Sep 13 01:34:02.447152 waagent[1806]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 13 01:34:02.447152 waagent[1806]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:d2:06 brd ff:ff:ff:ff:ff:ff
Sep 13 01:34:02.447152 waagent[1806]: 3: enP48491s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:d2:06 brd ff:ff:ff:ff:ff:ff\ altname enP48491p0s2
Sep 13 01:34:02.447152 waagent[1806]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 13 01:34:02.447152 waagent[1806]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 13 01:34:02.447152 waagent[1806]: 2: eth0 inet 10.200.20.20/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 13 01:34:02.447152 waagent[1806]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 13 01:34:02.447152 waagent[1806]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Sep 13 01:34:02.447152 waagent[1806]: 2: eth0 inet6 fe80::222:48ff:fe7c:d206/64 scope link \ valid_lft forever preferred_lft forever
Sep 13 01:34:02.455505 waagent[1806]: 2025-09-13T01:34:02.455407Z INFO ExtHandler ExtHandler Downloading agent manifest
Sep 13 01:34:02.520366 waagent[1806]: 2025-09-13T01:34:02.520180Z INFO ExtHandler ExtHandler
Sep 13 01:34:02.520682 waagent[1806]: 2025-09-13T01:34:02.520610Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 0086f9c5-9573-4510-8164-c4650bf2b514 correlation 6268f695-5b55-493d-b770-d45235bb3766 created: 2025-09-13T01:32:06.494156Z]
Sep 13 01:34:02.522287 waagent[1806]: 2025-09-13T01:34:02.522206Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Sep 13 01:34:02.523249 waagent[1806]: 2025-09-13T01:34:02.523176Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
Sep 13 01:34:02.526232 waagent[1806]: 2025-09-13T01:34:02.526157Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 6 ms]
Sep 13 01:34:02.545150 waagent[1806]: 2025-09-13T01:34:02.545079Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Sep 13 01:34:02.556221 waagent[1806]: 2025-09-13T01:34:02.556149Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Sep 13 01:34:02.562686 waagent[1806]: 2025-09-13T01:34:02.562614Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A5CE4714-BF17-4FEC-A916-27E64CA744DD;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
Sep 13 01:34:06.048267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 01:34:06.048470 systemd[1]: Stopped kubelet.service.
Sep 13 01:34:06.049968 systemd[1]: Starting kubelet.service...
Sep 13 01:34:06.367070 systemd[1]: Started kubelet.service.
Sep 13 01:34:06.411547 kubelet[1855]: E0913 01:34:06.411493 1855 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:34:06.413364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:34:06.413518 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:34:16.548259 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 13 01:34:16.548457 systemd[1]: Stopped kubelet.service.
Sep 13 01:34:16.549891 systemd[1]: Starting kubelet.service...
Sep 13 01:34:16.858097 systemd[1]: Started kubelet.service.
Sep 13 01:34:16.909211 kubelet[1870]: E0913 01:34:16.909170 1870 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:34:16.911046 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:34:16.911185 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:34:18.393008 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Sep 13 01:34:23.325152 systemd[1]: Created slice system-sshd.slice.
Sep 13 01:34:23.326372 systemd[1]: Started sshd@0-10.200.20.20:22-10.200.16.10:58012.service.
Sep 13 01:34:24.101491 sshd[1876]: Accepted publickey for core from 10.200.16.10 port 58012 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:34:24.118533 sshd[1876]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:34:24.122974 systemd[1]: Started session-3.scope.
Sep 13 01:34:24.123975 systemd-logind[1563]: New session 3 of user core.
Sep 13 01:34:24.500312 systemd[1]: Started sshd@1-10.200.20.20:22-10.200.16.10:58014.service.
Sep 13 01:34:24.913176 sshd[1881]: Accepted publickey for core from 10.200.16.10 port 58014 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:34:24.914495 sshd[1881]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:34:24.918512 systemd-logind[1563]: New session 4 of user core.
Sep 13 01:34:24.918891 systemd[1]: Started session-4.scope.
Sep 13 01:34:25.240617 sshd[1881]: pam_unix(sshd:session): session closed for user core
Sep 13 01:34:25.243932 systemd[1]: sshd@1-10.200.20.20:22-10.200.16.10:58014.service: Deactivated successfully.
Sep 13 01:34:25.245286 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 01:34:25.245826 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit.
Sep 13 01:34:25.246575 systemd-logind[1563]: Removed session 4.
Sep 13 01:34:25.307330 systemd[1]: Started sshd@2-10.200.20.20:22-10.200.16.10:58028.service.
Sep 13 01:34:25.717834 sshd[1888]: Accepted publickey for core from 10.200.16.10 port 58028 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:34:25.720364 sshd[1888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:34:25.724180 systemd-logind[1563]: New session 5 of user core.
Sep 13 01:34:25.724725 systemd[1]: Started session-5.scope.
Sep 13 01:34:26.028499 sshd[1888]: pam_unix(sshd:session): session closed for user core
Sep 13 01:34:26.032380 systemd[1]: sshd@2-10.200.20.20:22-10.200.16.10:58028.service: Deactivated successfully.
Sep 13 01:34:26.033512 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit.
Sep 13 01:34:26.033735 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 01:34:26.034609 systemd-logind[1563]: Removed session 5.
Sep 13 01:34:26.101078 systemd[1]: Started sshd@3-10.200.20.20:22-10.200.16.10:58040.service.
Sep 13 01:34:26.511171 sshd[1895]: Accepted publickey for core from 10.200.16.10 port 58040 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:34:26.512817 sshd[1895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:34:26.517078 systemd[1]: Started session-6.scope.
Sep 13 01:34:26.517723 systemd-logind[1563]: New session 6 of user core.
Sep 13 01:34:26.838610 sshd[1895]: pam_unix(sshd:session): session closed for user core
Sep 13 01:34:26.841291 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit.
Sep 13 01:34:26.842039 systemd[1]: sshd@3-10.200.20.20:22-10.200.16.10:58040.service: Deactivated successfully.
Sep 13 01:34:26.842813 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 01:34:26.843449 systemd-logind[1563]: Removed session 6.
Sep 13 01:34:26.905690 systemd[1]: Started sshd@4-10.200.20.20:22-10.200.16.10:58044.service.
Sep 13 01:34:27.048291 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 13 01:34:27.048489 systemd[1]: Stopped kubelet.service.
Sep 13 01:34:27.050035 systemd[1]: Starting kubelet.service...
Sep 13 01:34:27.209275 systemd[1]: Started kubelet.service.
Sep 13 01:34:27.245529 kubelet[1912]: E0913 01:34:27.245485 1912 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:34:27.247294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:34:27.247466 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:34:27.323116 sshd[1902]: Accepted publickey for core from 10.200.16.10 port 58044 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:34:27.324441 sshd[1902]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:34:27.328657 systemd[1]: Started session-7.scope.
Sep 13 01:34:27.329457 systemd-logind[1563]: New session 7 of user core.
Sep 13 01:34:27.982576 sudo[1921]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 01:34:27.982808 sudo[1921]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 01:34:28.019034 systemd[1]: Starting docker.service...
Sep 13 01:34:28.078723 env[1931]: time="2025-09-13T01:34:28.078674767Z" level=info msg="Starting up"
Sep 13 01:34:28.079871 env[1931]: time="2025-09-13T01:34:28.079845286Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 01:34:28.079972 env[1931]: time="2025-09-13T01:34:28.079951526Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 01:34:28.080046 env[1931]: time="2025-09-13T01:34:28.080031046Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 01:34:28.080116 env[1931]: time="2025-09-13T01:34:28.080091646Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 01:34:28.082152 env[1931]: time="2025-09-13T01:34:28.082130724Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 01:34:28.082242 env[1931]: time="2025-09-13T01:34:28.082229124Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 01:34:28.082300 env[1931]: time="2025-09-13T01:34:28.082287444Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 01:34:28.082349 env[1931]: time="2025-09-13T01:34:28.082338044Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 01:34:28.088611 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport177890735-merged.mount: Deactivated successfully.
Sep 13 01:34:28.157550 env[1931]: time="2025-09-13T01:34:28.157515293Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Sep 13 01:34:28.157744 env[1931]: time="2025-09-13T01:34:28.157730453Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Sep 13 01:34:28.157956 env[1931]: time="2025-09-13T01:34:28.157940493Z" level=info msg="Loading containers: start."
Sep 13 01:34:28.404407 kernel: Initializing XFRM netlink socket
Sep 13 01:34:28.446077 env[1931]: time="2025-09-13T01:34:28.446042423Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 01:34:28.647674 systemd-networkd[1769]: docker0: Link UP
Sep 13 01:34:28.678832 env[1931]: time="2025-09-13T01:34:28.678615125Z" level=info msg="Loading containers: done."
Sep 13 01:34:28.709136 env[1931]: time="2025-09-13T01:34:28.709094577Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 01:34:28.709539 env[1931]: time="2025-09-13T01:34:28.709519137Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 01:34:28.709746 env[1931]: time="2025-09-13T01:34:28.709730936Z" level=info msg="Daemon has completed initialization"
Sep 13 01:34:28.749633 systemd[1]: Started docker.service.
Sep 13 01:34:28.750412 env[1931]: time="2025-09-13T01:34:28.750350258Z" level=info msg="API listen on /run/docker.sock"
Sep 13 01:34:30.405470 update_engine[1564]: I0913 01:34:30.405431 1564 update_attempter.cc:509] Updating boot flags...
Sep 13 01:34:32.671515 env[1591]: time="2025-09-13T01:34:32.671458827Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 01:34:33.804264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3303116949.mount: Deactivated successfully.
Sep 13 01:34:35.308564 env[1591]: time="2025-09-13T01:34:35.308518812Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:35.316479 env[1591]: time="2025-09-13T01:34:35.316424928Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:35.324645 env[1591]: time="2025-09-13T01:34:35.324609243Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:35.329938 env[1591]: time="2025-09-13T01:34:35.329896879Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:35.331009 env[1591]: time="2025-09-13T01:34:35.330980199Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 13 01:34:35.333140 env[1591]: time="2025-09-13T01:34:35.333113318Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 01:34:36.688824 env[1591]: time="2025-09-13T01:34:36.688779736Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:36.695258 env[1591]: time="2025-09-13T01:34:36.695207132Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:36.699416 env[1591]: time="2025-09-13T01:34:36.699365650Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:36.703830 env[1591]: time="2025-09-13T01:34:36.703793127Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:36.704989 env[1591]: time="2025-09-13T01:34:36.704949367Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 13 01:34:36.706576 env[1591]: time="2025-09-13T01:34:36.706543526Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 01:34:37.298302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 13 01:34:37.298494 systemd[1]: Stopped kubelet.service.
Sep 13 01:34:37.299962 systemd[1]: Starting kubelet.service...
Sep 13 01:34:37.403441 systemd[1]: Started kubelet.service.
Sep 13 01:34:37.570041 kubelet[2091]: E0913 01:34:37.569933 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:34:37.571801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:34:37.571947 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:34:38.357573 env[1591]: time="2025-09-13T01:34:38.357522502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:38.364432 env[1591]: time="2025-09-13T01:34:38.364364019Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:38.368755 env[1591]: time="2025-09-13T01:34:38.368714817Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:38.373899 env[1591]: time="2025-09-13T01:34:38.373859214Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:38.374736 env[1591]: time="2025-09-13T01:34:38.374706014Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 13 01:34:38.375402 env[1591]: time="2025-09-13T01:34:38.375365854Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 01:34:39.613471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1681176078.mount: Deactivated successfully.
Sep 13 01:34:40.103762 env[1591]: time="2025-09-13T01:34:40.103719620Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:40.111607 env[1591]: time="2025-09-13T01:34:40.111567664Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:40.115551 env[1591]: time="2025-09-13T01:34:40.115500646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:40.119795 env[1591]: time="2025-09-13T01:34:40.119761750Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:40.120099 env[1591]: time="2025-09-13T01:34:40.120069872Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 13 01:34:40.120751 env[1591]: time="2025-09-13T01:34:40.120644715Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 01:34:40.756424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2152384721.mount: Deactivated successfully.
Sep 13 01:34:42.320534 env[1591]: time="2025-09-13T01:34:42.320484646Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:42.329487 env[1591]: time="2025-09-13T01:34:42.329443120Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:42.336373 env[1591]: time="2025-09-13T01:34:42.336333846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:42.343168 env[1591]: time="2025-09-13T01:34:42.343131886Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:42.343514 env[1591]: time="2025-09-13T01:34:42.343489065Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 13 01:34:42.344427 env[1591]: time="2025-09-13T01:34:42.344400713Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 01:34:42.905295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1991725650.mount: Deactivated successfully.
Sep 13 01:34:42.927725 env[1591]: time="2025-09-13T01:34:42.927652875Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:42.937591 env[1591]: time="2025-09-13T01:34:42.937544999Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:42.942162 env[1591]: time="2025-09-13T01:34:42.942128362Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:42.945930 env[1591]: time="2025-09-13T01:34:42.945897042Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:42.946318 env[1591]: time="2025-09-13T01:34:42.946290423Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 13 01:34:42.946901 env[1591]: time="2025-09-13T01:34:42.946872894Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 01:34:43.634246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount718092720.mount: Deactivated successfully.
Sep 13 01:34:47.156774 env[1591]: time="2025-09-13T01:34:47.156716343Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:47.168112 env[1591]: time="2025-09-13T01:34:47.168066587Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:47.172763 env[1591]: time="2025-09-13T01:34:47.172716201Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:47.177768 env[1591]: time="2025-09-13T01:34:47.177717632Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:47.178791 env[1591]: time="2025-09-13T01:34:47.178760680Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 13 01:34:47.636460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Sep 13 01:34:47.636669 systemd[1]: Stopped kubelet.service.
Sep 13 01:34:47.638296 systemd[1]: Starting kubelet.service...
Sep 13 01:34:47.731256 systemd[1]: Started kubelet.service.
Sep 13 01:34:47.821928 kubelet[2122]: E0913 01:34:47.821863 2122 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:34:47.823818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:34:47.823988 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:34:54.140407 systemd[1]: Stopped kubelet.service.
Sep 13 01:34:54.143658 systemd[1]: Starting kubelet.service...
Sep 13 01:34:54.174707 systemd[1]: Reloading.
Sep 13 01:34:54.244701 /usr/lib/systemd/system-generators/torcx-generator[2158]: time="2025-09-13T01:34:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 01:34:54.244736 /usr/lib/systemd/system-generators/torcx-generator[2158]: time="2025-09-13T01:34:54Z" level=info msg="torcx already run"
Sep 13 01:34:54.339735 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 01:34:54.339755 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 01:34:54.359113 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 01:34:54.463029 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 13 01:34:54.463104 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 13 01:34:54.463426 systemd[1]: Stopped kubelet.service.
Sep 13 01:34:54.465587 systemd[1]: Starting kubelet.service...
Sep 13 01:34:54.627899 systemd[1]: Started kubelet.service.
Sep 13 01:34:54.667402 kubelet[2236]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 01:34:54.667773 kubelet[2236]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 01:34:54.667822 kubelet[2236]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 01:34:54.668004 kubelet[2236]: I0913 01:34:54.667969 2236 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 01:34:56.032716 kubelet[2236]: I0913 01:34:56.032673 2236 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 01:34:56.033087 kubelet[2236]: I0913 01:34:56.033075 2236 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 01:34:56.033414 kubelet[2236]: I0913 01:34:56.033399 2236 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 01:34:56.055236 kubelet[2236]: E0913 01:34:56.055174 2236 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:56.057253 kubelet[2236]: I0913 01:34:56.057200 2236 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 01:34:56.065617 kubelet[2236]: E0913 01:34:56.065534 2236 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 01:34:56.065617 kubelet[2236]: I0913 01:34:56.065615 2236 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 01:34:56.069735 kubelet[2236]: I0913 01:34:56.069708 2236 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 01:34:56.070721 kubelet[2236]: I0913 01:34:56.070699 2236 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 01:34:56.070875 kubelet[2236]: I0913 01:34:56.070841 2236 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 01:34:56.071058 kubelet[2236]: I0913 01:34:56.070875 2236 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-8d5f1b2fe1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 13 01:34:56.071146 kubelet[2236]: I0913 01:34:56.071064 2236 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 01:34:56.071146 kubelet[2236]: I0913 01:34:56.071073 2236 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 01:34:56.071192 kubelet[2236]: I0913 01:34:56.071183 2236 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 01:34:56.076181 kubelet[2236]: W0913 01:34:56.076130 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-8d5f1b2fe1&limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Sep 13 01:34:56.076245 kubelet[2236]: E0913 01:34:56.076199 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-8d5f1b2fe1&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:56.076890 kubelet[2236]: I0913 01:34:56.076870 2236 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 01:34:56.076984 kubelet[2236]: I0913 01:34:56.076973 2236 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 01:34:56.077061 kubelet[2236]: I0913 01:34:56.077052 2236 kubelet.go:314] "Adding apiserver pod source"
Sep 13 01:34:56.079445 kubelet[2236]: I0913 01:34:56.079416 2236 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 01:34:56.085961 kubelet[2236]: W0913 01:34:56.085905 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Sep 13 01:34:56.086144 kubelet[2236]: E0913 01:34:56.086125 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:56.091707 kubelet[2236]: I0913 01:34:56.091674 2236 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 01:34:56.092183 kubelet[2236]: I0913 01:34:56.092154 2236 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 01:34:56.092219 kubelet[2236]: W0913 01:34:56.092213 2236 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 01:34:56.092826 kubelet[2236]: I0913 01:34:56.092799 2236 server.go:1274] "Started kubelet"
Sep 13 01:34:56.093496 kubelet[2236]: I0913 01:34:56.093459 2236 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 01:34:56.096125 kubelet[2236]: I0913 01:34:56.096071 2236 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 01:34:56.096305 kubelet[2236]: I0913 01:34:56.096289 2236 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 01:34:56.096437 kubelet[2236]: I0913 01:34:56.096421 2236 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 01:34:56.097626 kubelet[2236]: E0913 01:34:56.096580 2236 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.20:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-8d5f1b2fe1.1864b3a9b79e2719 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-8d5f1b2fe1,UID:ci-3510.3.8-n-8d5f1b2fe1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-8d5f1b2fe1,},FirstTimestamp:2025-09-13 01:34:56.092776217 +0000 UTC m=+1.455481169,LastTimestamp:2025-09-13 01:34:56.092776217 +0000 UTC m=+1.455481169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-8d5f1b2fe1,}"
Sep 13 01:34:56.105745 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 13 01:34:56.105859 kubelet[2236]: E0913 01:34:56.100591 2236 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 01:34:56.106072 kubelet[2236]: I0913 01:34:56.106059 2236 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 01:34:56.109047 kubelet[2236]: I0913 01:34:56.106314 2236 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 01:34:56.110064 kubelet[2236]: E0913 01:34:56.110036 2236 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-8d5f1b2fe1\" not found"
Sep 13 01:34:56.110192 kubelet[2236]: I0913 01:34:56.110181 2236 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 01:34:56.110547 kubelet[2236]: I0913 01:34:56.110526 2236 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 01:34:56.110687 kubelet[2236]: I0913 01:34:56.110676 2236 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 01:34:56.111707 kubelet[2236]: W0913 01:34:56.111669 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Sep 13 01:34:56.111839 kubelet[2236]: E0913 01:34:56.111821 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:56.112102 kubelet[2236]: I0913 01:34:56.112086 2236 factory.go:221] Registration of the systemd container factory successfully
Sep 13 01:34:56.112276 kubelet[2236]: I0913 01:34:56.112256 2236 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 01:34:56.115616 kubelet[2236]: E0913 01:34:56.115578 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-8d5f1b2fe1?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="200ms"
Sep 13 01:34:56.116001 kubelet[2236]: I0913 01:34:56.115981 2236 factory.go:221] Registration of the containerd container factory successfully
Sep 13 01:34:56.197591 kubelet[2236]: I0913 01:34:56.197541 2236 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 01:34:56.198752 kubelet[2236]: I0913 01:34:56.198724 2236 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 01:34:56.198912 kubelet[2236]: I0913 01:34:56.198902 2236 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 01:34:56.199000 kubelet[2236]: I0913 01:34:56.198991 2236 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 01:34:56.199155 kubelet[2236]: E0913 01:34:56.199136 2236 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 01:34:56.199809 kubelet[2236]: W0913 01:34:56.199786 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Sep 13 01:34:56.199942 kubelet[2236]: E0913 01:34:56.199924 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:56.210467 kubelet[2236]: E0913 01:34:56.210429 2236 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-8d5f1b2fe1\" not found"
Sep 13 01:34:56.217159 kubelet[2236]: I0913 01:34:56.217136 2236 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 01:34:56.217314 kubelet[2236]: I0913 01:34:56.217302 2236 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 01:34:56.217395 kubelet[2236]: I0913 01:34:56.217378 2236 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 01:34:56.222843 kubelet[2236]: I0913 01:34:56.222800 2236 policy_none.go:49] "None policy: Start"
Sep 13 01:34:56.223827 kubelet[2236]: I0913 01:34:56.223804 2236 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 01:34:56.223915 kubelet[2236]: I0913 01:34:56.223835 2236 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 01:34:56.231719 kubelet[2236]: I0913 01:34:56.231688 2236 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 01:34:56.231860 kubelet[2236]: I0913 01:34:56.231842 2236 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 01:34:56.231894 kubelet[2236]: I0913 01:34:56.231860 2236 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 01:34:56.233551 kubelet[2236]: I0913 01:34:56.233532 2236 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 01:34:56.236741 kubelet[2236]: E0913 01:34:56.236714 2236 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-8d5f1b2fe1\" not found"
Sep 13 01:34:56.311660 kubelet[2236]: I0913 01:34:56.311618 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ea3fc5c5bbfc9c1b55576c51d28205c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"7ea3fc5c5bbfc9c1b55576c51d28205c\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:56.311859 kubelet[2236]: I0913 01:34:56.311842 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ac257fe99d39c2b070b4898a4e95ca0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"8ac257fe99d39c2b070b4898a4e95ca0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:56.311983 kubelet[2236]: I0913 01:34:56.311969 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ac257fe99d39c2b070b4898a4e95ca0-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"8ac257fe99d39c2b070b4898a4e95ca0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:56.312087 kubelet[2236]: I0913 01:34:56.312074 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8ac257fe99d39c2b070b4898a4e95ca0-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"8ac257fe99d39c2b070b4898a4e95ca0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:56.312192 kubelet[2236]: I0913 01:34:56.312179 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ac257fe99d39c2b070b4898a4e95ca0-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"8ac257fe99d39c2b070b4898a4e95ca0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:56.312289 kubelet[2236]: I0913 01:34:56.312278 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ac257fe99d39c2b070b4898a4e95ca0-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"8ac257fe99d39c2b070b4898a4e95ca0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:56.312424 kubelet[2236]: I0913 01:34:56.312372 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8edd83e53ac08c0d41e7b31c57a51432-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"8edd83e53ac08c0d41e7b31c57a51432\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:56.312524 kubelet[2236]: I0913 01:34:56.312511 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ea3fc5c5bbfc9c1b55576c51d28205c-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"7ea3fc5c5bbfc9c1b55576c51d28205c\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:56.312633 kubelet[2236]: I0913 01:34:56.312621 2236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ea3fc5c5bbfc9c1b55576c51d28205c-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"7ea3fc5c5bbfc9c1b55576c51d28205c\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:56.317014 kubelet[2236]: E0913 01:34:56.316983 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-8d5f1b2fe1?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="400ms"
Sep 13 01:34:56.333678 kubelet[2236]: I0913 01:34:56.333647 2236 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:56.334183 kubelet[2236]: E0913 01:34:56.334153 2236 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:56.536288 kubelet[2236]: I0913 01:34:56.536243 2236 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:56.536800 kubelet[2236]: E0913 01:34:56.536777 2236 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:56.610099 env[1591]: time="2025-09-13T01:34:56.609985602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1,Uid:8ac257fe99d39c2b070b4898a4e95ca0,Namespace:kube-system,Attempt:0,}"
Sep 13 01:34:56.610937 env[1591]: time="2025-09-13T01:34:56.610902035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-8d5f1b2fe1,Uid:8edd83e53ac08c0d41e7b31c57a51432,Namespace:kube-system,Attempt:0,}"
Sep 13 01:34:56.611593 env[1591]: time="2025-09-13T01:34:56.611490737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1,Uid:7ea3fc5c5bbfc9c1b55576c51d28205c,Namespace:kube-system,Attempt:0,}"
Sep 13 01:34:56.717780 kubelet[2236]: E0913 01:34:56.717733 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-8d5f1b2fe1?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="800ms"
Sep 13 01:34:56.939371 kubelet[2236]: I0913 01:34:56.939252 2236 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:56.939971 kubelet[2236]: E0913 01:34:56.939943 2236 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:57.173871 kubelet[2236]: W0913 01:34:57.173806 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Sep 13 01:34:57.174225 kubelet[2236]: E0913 01:34:57.173879 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:57.208908 kubelet[2236]: W0913 01:34:57.208738 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-8d5f1b2fe1&limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Sep 13 01:34:57.208908 kubelet[2236]: E0913 01:34:57.208803 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-8d5f1b2fe1&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:57.288289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1575886674.mount: Deactivated successfully.
Sep 13 01:34:57.309665 env[1591]: time="2025-09-13T01:34:57.309620783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:57.323795 env[1591]: time="2025-09-13T01:34:57.323739880Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:57.327514 env[1591]: time="2025-09-13T01:34:57.327475372Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:57.330381 env[1591]: time="2025-09-13T01:34:57.330337273Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:57.344713 env[1591]: time="2025-09-13T01:34:57.344665177Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:57.347864 env[1591]: time="2025-09-13T01:34:57.347828889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:57.350913 env[1591]: time="2025-09-13T01:34:57.350864756Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:57.357466 env[1591]: time="2025-09-13T01:34:57.357380225Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:57.360499 env[1591]: time="2025-09-13T01:34:57.360464574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:57.364180 env[1591]: time="2025-09-13T01:34:57.364131223Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:57.367726 env[1591]: time="2025-09-13T01:34:57.367690509Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:57.377896 env[1591]: time="2025-09-13T01:34:57.377845906Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:57.455382 env[1591]: time="2025-09-13T01:34:57.455192231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:34:57.455382 env[1591]: time="2025-09-13T01:34:57.455238752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:34:57.455382 env[1591]: time="2025-09-13T01:34:57.455249313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:34:57.455755 env[1591]: time="2025-09-13T01:34:57.455717689Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a4c3313ded6ac16634fa4a84e6b54d734beb795cb177e2cb30311eea60e1d7a pid=2276 runtime=io.containerd.runc.v2
Sep 13 01:34:57.465468 env[1591]: time="2025-09-13T01:34:57.465304947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:34:57.465915 env[1591]: time="2025-09-13T01:34:57.465878567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:34:57.466035 env[1591]: time="2025-09-13T01:34:57.466014132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:34:57.466290 env[1591]: time="2025-09-13T01:34:57.466260021Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f27b98cff8dd6576da61a5b4e8799ea4680f0a1ce27ea9519f9fda7f118e6779 pid=2298 runtime=io.containerd.runc.v2
Sep 13 01:34:57.472319 env[1591]: time="2025-09-13T01:34:57.472137028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:34:57.472319 env[1591]: time="2025-09-13T01:34:57.472280113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:34:57.472530 env[1591]: time="2025-09-13T01:34:57.472291753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:34:57.472530 env[1591]: time="2025-09-13T01:34:57.472450879Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a731f3888d0627b119c457ea635ebfff1875e1abc79b65f6845106d8aafd9017 pid=2313 runtime=io.containerd.runc.v2
Sep 13 01:34:57.473673 kubelet[2236]: W0913 01:34:57.473587 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Sep 13 01:34:57.473673 kubelet[2236]: E0913 01:34:57.473629 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:57.520827 kubelet[2236]: E0913 01:34:57.520773 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-8d5f1b2fe1?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="1.6s"
Sep 13 01:34:57.553777 env[1591]: time="2025-09-13T01:34:57.553726622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1,Uid:7ea3fc5c5bbfc9c1b55576c51d28205c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a4c3313ded6ac16634fa4a84e6b54d734beb795cb177e2cb30311eea60e1d7a\""
Sep 13 01:34:57.557217 env[1591]: time="2025-09-13T01:34:57.557175383Z" level=info msg="CreateContainer within sandbox \"7a4c3313ded6ac16634fa4a84e6b54d734beb795cb177e2cb30311eea60e1d7a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 01:34:57.564465 env[1591]: time="2025-09-13T01:34:57.564424879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-8d5f1b2fe1,Uid:8edd83e53ac08c0d41e7b31c57a51432,Namespace:kube-system,Attempt:0,} returns sandbox id \"a731f3888d0627b119c457ea635ebfff1875e1abc79b65f6845106d8aafd9017\""
Sep 13 01:34:57.564727 env[1591]: time="2025-09-13T01:34:57.564706008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1,Uid:8ac257fe99d39c2b070b4898a4e95ca0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f27b98cff8dd6576da61a5b4e8799ea4680f0a1ce27ea9519f9fda7f118e6779\""
Sep 13 01:34:57.567880 env[1591]: time="2025-09-13T01:34:57.567844319Z" level=info msg="CreateContainer within sandbox \"f27b98cff8dd6576da61a5b4e8799ea4680f0a1ce27ea9519f9fda7f118e6779\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 01:34:57.568201 env[1591]: time="2025-09-13T01:34:57.568180851Z" level=info msg="CreateContainer within sandbox \"a731f3888d0627b119c457ea635ebfff1875e1abc79b65f6845106d8aafd9017\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 01:34:57.609589 kubelet[2236]: W0913 01:34:57.609549 2236 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.20:6443: connect: connection refused
Sep 13 01:34:57.609738 kubelet[2236]: E0913 01:34:57.609599 2236 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:57.634873 env[1591]: time="2025-09-13T01:34:57.634821198Z" level=info msg="CreateContainer within sandbox \"7a4c3313ded6ac16634fa4a84e6b54d734beb795cb177e2cb30311eea60e1d7a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a996ecdee88806ab8b5faee7ef60b45cffe0db310c61ecbba3670b8c792ca096\""
Sep 13 01:34:57.635547 env[1591]: time="2025-09-13T01:34:57.635518303Z" level=info msg="StartContainer for \"a996ecdee88806ab8b5faee7ef60b45cffe0db310c61ecbba3670b8c792ca096\""
Sep 13 01:34:57.639127 env[1591]: time="2025-09-13T01:34:57.639089229Z" level=info msg="CreateContainer within sandbox \"f27b98cff8dd6576da61a5b4e8799ea4680f0a1ce27ea9519f9fda7f118e6779\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8cd88fcd1e8a56a900d39d0793edd09cd21fc9cb55ba0c4f82322e22af6f0677\""
Sep 13 01:34:57.639788 env[1591]: time="2025-09-13T01:34:57.639758172Z" level=info msg="StartContainer for \"8cd88fcd1e8a56a900d39d0793edd09cd21fc9cb55ba0c4f82322e22af6f0677\""
Sep 13 01:34:57.659553 env[1591]: time="2025-09-13T01:34:57.659499508Z" level=info msg="CreateContainer within sandbox \"a731f3888d0627b119c457ea635ebfff1875e1abc79b65f6845106d8aafd9017\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"54bc0e911174d610c224209f40adacbd6981eb38ce0ee6cf6fd60e07d628ee93\""
Sep 13 01:34:57.663624 env[1591]: time="2025-09-13T01:34:57.663578891Z" level=info msg="StartContainer for \"54bc0e911174d610c224209f40adacbd6981eb38ce0ee6cf6fd60e07d628ee93\""
Sep 13 01:34:57.738660 env[1591]: time="2025-09-13T01:34:57.738268602Z" level=info msg="StartContainer for \"a996ecdee88806ab8b5faee7ef60b45cffe0db310c61ecbba3670b8c792ca096\" returns successfully"
Sep 13 01:34:57.748073 kubelet[2236]: I0913 01:34:57.742206 2236 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-8d5f1b2fe1"
Sep 13 01:34:57.749584 kubelet[2236]: E0913 01:34:57.749522 2236 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect:
connection refused" node="ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:34:57.759866 env[1591]: time="2025-09-13T01:34:57.759810561Z" level=info msg="StartContainer for \"8cd88fcd1e8a56a900d39d0793edd09cd21fc9cb55ba0c4f82322e22af6f0677\" returns successfully" Sep 13 01:34:57.779883 env[1591]: time="2025-09-13T01:34:57.779823586Z" level=info msg="StartContainer for \"54bc0e911174d610c224209f40adacbd6981eb38ce0ee6cf6fd60e07d628ee93\" returns successfully" Sep 13 01:34:59.351033 kubelet[2236]: I0913 01:34:59.351000 2236 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:01.487264 kubelet[2236]: E0913 01:35:01.487225 2236 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-8d5f1b2fe1\" not found" node="ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:01.552248 kubelet[2236]: I0913 01:35:01.552212 2236 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:01.623869 kubelet[2236]: E0913 01:35:01.623759 2236 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.8-n-8d5f1b2fe1.1864b3a9b79e2719 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-8d5f1b2fe1,UID:ci-3510.3.8-n-8d5f1b2fe1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-8d5f1b2fe1,},FirstTimestamp:2025-09-13 01:34:56.092776217 +0000 UTC m=+1.455481169,LastTimestamp:2025-09-13 01:34:56.092776217 +0000 UTC m=+1.455481169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-8d5f1b2fe1,}" Sep 13 01:35:02.088090 kubelet[2236]: I0913 01:35:02.088061 2236 apiserver.go:52] "Watching apiserver" Sep 13 01:35:02.111361 kubelet[2236]: I0913 
01:35:02.111306 2236 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 01:35:03.709497 systemd[1]: Reloading. Sep 13 01:35:03.778301 /usr/lib/systemd/system-generators/torcx-generator[2532]: time="2025-09-13T01:35:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 01:35:03.778333 /usr/lib/systemd/system-generators/torcx-generator[2532]: time="2025-09-13T01:35:03Z" level=info msg="torcx already run" Sep 13 01:35:03.870098 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:35:03.870120 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:35:03.889564 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:35:03.980054 systemd[1]: Stopping kubelet.service... Sep 13 01:35:04.001752 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 01:35:04.002060 systemd[1]: Stopped kubelet.service. Sep 13 01:35:04.004489 systemd[1]: Starting kubelet.service... Sep 13 01:35:04.242980 systemd[1]: Started kubelet.service. Sep 13 01:35:04.346680 kubelet[2605]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 01:35:04.346680 kubelet[2605]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 01:35:04.346680 kubelet[2605]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:35:04.348634 kubelet[2605]: I0913 01:35:04.348559 2605 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 01:35:04.365106 kubelet[2605]: I0913 01:35:04.365017 2605 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 01:35:04.365106 kubelet[2605]: I0913 01:35:04.365048 2605 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 01:35:04.365359 kubelet[2605]: I0913 01:35:04.365333 2605 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 01:35:04.366856 kubelet[2605]: I0913 01:35:04.366804 2605 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 01:35:04.369496 kubelet[2605]: I0913 01:35:04.369463 2605 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 01:35:04.377325 kubelet[2605]: E0913 01:35:04.377264 2605 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 01:35:04.377325 kubelet[2605]: I0913 01:35:04.377319 2605 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 13 01:35:04.383019 kubelet[2605]: I0913 01:35:04.382987 2605 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 01:35:04.383419 kubelet[2605]: I0913 01:35:04.383401 2605 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 01:35:04.383538 kubelet[2605]: I0913 01:35:04.383507 2605 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 01:35:04.383703 kubelet[2605]: I0913 01:35:04.383536 2605 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-8d5f1b2fe1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experimenta
lMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 01:35:04.383789 kubelet[2605]: I0913 01:35:04.383707 2605 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 01:35:04.383789 kubelet[2605]: I0913 01:35:04.383718 2605 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 01:35:04.383789 kubelet[2605]: I0913 01:35:04.383752 2605 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:35:04.383876 kubelet[2605]: I0913 01:35:04.383848 2605 kubelet.go:408] "Attempting to sync node with API server" Sep 13 01:35:04.383876 kubelet[2605]: I0913 01:35:04.383859 2605 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 01:35:04.386375 kubelet[2605]: I0913 01:35:04.386351 2605 kubelet.go:314] "Adding apiserver pod source" Sep 13 01:35:04.386446 kubelet[2605]: I0913 01:35:04.386410 2605 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 01:35:04.389764 kubelet[2605]: I0913 01:35:04.388759 2605 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 01:35:04.389764 kubelet[2605]: I0913 01:35:04.389291 2605 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 01:35:04.391436 kubelet[2605]: I0913 01:35:04.391415 2605 server.go:1274] "Started kubelet" Sep 13 01:35:04.401044 kubelet[2605]: I0913 01:35:04.400855 2605 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 01:35:04.402678 kubelet[2605]: I0913 01:35:04.401639 2605 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 01:35:04.402678 kubelet[2605]: I0913 01:35:04.402031 2605 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 01:35:04.402678 kubelet[2605]: 
I0913 01:35:04.402291 2605 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 01:35:04.402678 kubelet[2605]: I0913 01:35:04.402536 2605 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 01:35:04.403276 kubelet[2605]: I0913 01:35:04.403216 2605 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 01:35:04.403363 kubelet[2605]: E0913 01:35:04.403351 2605 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-8d5f1b2fe1\" not found" Sep 13 01:35:04.405031 kubelet[2605]: I0913 01:35:04.403900 2605 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 01:35:04.405031 kubelet[2605]: I0913 01:35:04.404033 2605 reconciler.go:26] "Reconciler: start to sync state" Sep 13 01:35:04.430423 kubelet[2605]: I0913 01:35:04.428521 2605 server.go:449] "Adding debug handlers to kubelet server" Sep 13 01:35:04.441265 kubelet[2605]: I0913 01:35:04.441225 2605 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 01:35:04.455345 kubelet[2605]: I0913 01:35:04.455308 2605 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 01:35:04.456434 kubelet[2605]: I0913 01:35:04.456413 2605 factory.go:221] Registration of the containerd container factory successfully Sep 13 01:35:04.456565 kubelet[2605]: I0913 01:35:04.456555 2605 factory.go:221] Registration of the systemd container factory successfully Sep 13 01:35:04.456779 kubelet[2605]: I0913 01:35:04.456763 2605 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 01:35:04.456851 kubelet[2605]: I0913 01:35:04.456842 2605 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 01:35:04.456919 kubelet[2605]: I0913 01:35:04.456910 2605 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 01:35:04.457026 kubelet[2605]: E0913 01:35:04.457009 2605 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 01:35:04.465378 kubelet[2605]: E0913 01:35:04.465351 2605 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 01:35:04.525755 kubelet[2605]: I0913 01:35:04.525658 2605 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 01:35:04.526082 kubelet[2605]: I0913 01:35:04.526038 2605 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 01:35:04.526319 kubelet[2605]: I0913 01:35:04.526306 2605 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:35:04.526813 kubelet[2605]: I0913 01:35:04.526795 2605 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 01:35:04.526904 kubelet[2605]: I0913 01:35:04.526880 2605 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 01:35:04.526968 kubelet[2605]: I0913 01:35:04.526960 2605 policy_none.go:49] "None policy: Start" Sep 13 01:35:04.528142 kubelet[2605]: I0913 01:35:04.528126 2605 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 01:35:04.528249 kubelet[2605]: I0913 01:35:04.528239 2605 state_mem.go:35] "Initializing new in-memory state store" Sep 13 01:35:04.528475 kubelet[2605]: I0913 01:35:04.528463 2605 state_mem.go:75] "Updated machine memory state" Sep 13 01:35:04.529702 kubelet[2605]: I0913 01:35:04.529683 2605 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 01:35:04.529938 
kubelet[2605]: I0913 01:35:04.529926 2605 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 01:35:04.530030 kubelet[2605]: I0913 01:35:04.530001 2605 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 01:35:04.531640 kubelet[2605]: I0913 01:35:04.531623 2605 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 01:35:04.568616 kubelet[2605]: W0913 01:35:04.568591 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:35:04.578627 kubelet[2605]: W0913 01:35:04.578591 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:35:04.578792 kubelet[2605]: W0913 01:35:04.578534 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:35:04.641215 kubelet[2605]: I0913 01:35:04.641166 2605 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:04.656236 kubelet[2605]: I0913 01:35:04.656202 2605 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:04.656421 kubelet[2605]: I0913 01:35:04.656295 2605 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:04.705612 kubelet[2605]: I0913 01:35:04.705577 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ea3fc5c5bbfc9c1b55576c51d28205c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"7ea3fc5c5bbfc9c1b55576c51d28205c\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1" Sep 13 
01:35:04.705866 kubelet[2605]: I0913 01:35:04.705838 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8ac257fe99d39c2b070b4898a4e95ca0-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"8ac257fe99d39c2b070b4898a4e95ca0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:04.705974 kubelet[2605]: I0913 01:35:04.705961 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ac257fe99d39c2b070b4898a4e95ca0-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"8ac257fe99d39c2b070b4898a4e95ca0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:04.706074 kubelet[2605]: I0913 01:35:04.706062 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ac257fe99d39c2b070b4898a4e95ca0-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"8ac257fe99d39c2b070b4898a4e95ca0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:04.706179 kubelet[2605]: I0913 01:35:04.706164 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ac257fe99d39c2b070b4898a4e95ca0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"8ac257fe99d39c2b070b4898a4e95ca0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:04.706276 kubelet[2605]: I0913 01:35:04.706263 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/7ea3fc5c5bbfc9c1b55576c51d28205c-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"7ea3fc5c5bbfc9c1b55576c51d28205c\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:04.706367 kubelet[2605]: I0913 01:35:04.706354 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ea3fc5c5bbfc9c1b55576c51d28205c-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"7ea3fc5c5bbfc9c1b55576c51d28205c\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:04.706477 kubelet[2605]: I0913 01:35:04.706463 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ac257fe99d39c2b070b4898a4e95ca0-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"8ac257fe99d39c2b070b4898a4e95ca0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:04.706569 kubelet[2605]: I0913 01:35:04.706556 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8edd83e53ac08c0d41e7b31c57a51432-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-8d5f1b2fe1\" (UID: \"8edd83e53ac08c0d41e7b31c57a51432\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:04.801052 sudo[2637]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 01:35:04.801280 sudo[2637]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 01:35:05.284992 sudo[2637]: pam_unix(sudo:session): session closed for user root Sep 13 01:35:05.388278 kubelet[2605]: I0913 01:35:05.388228 2605 apiserver.go:52] "Watching apiserver" Sep 13 01:35:05.404294 kubelet[2605]: I0913 01:35:05.404224 2605 
desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 01:35:05.502842 kubelet[2605]: W0913 01:35:05.502810 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:35:05.502961 kubelet[2605]: E0913 01:35:05.502893 2605 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:05.503338 kubelet[2605]: W0913 01:35:05.503320 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:35:05.503407 kubelet[2605]: E0913 01:35:05.503363 2605 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1" Sep 13 01:35:05.534311 kubelet[2605]: I0913 01:35:05.534253 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-8d5f1b2fe1" podStartSLOduration=1.534224265 podStartE2EDuration="1.534224265s" podCreationTimestamp="2025-09-13 01:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:35:05.520638995 +0000 UTC m=+1.269457919" watchObservedRunningTime="2025-09-13 01:35:05.534224265 +0000 UTC m=+1.283043189" Sep 13 01:35:05.548258 kubelet[2605]: I0913 01:35:05.548201 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8d5f1b2fe1" podStartSLOduration=1.548119184 podStartE2EDuration="1.548119184s" podCreationTimestamp="2025-09-13 01:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:35:05.547878817 +0000 UTC m=+1.296697741" watchObservedRunningTime="2025-09-13 01:35:05.548119184 +0000 UTC m=+1.296938108" Sep 13 01:35:05.548630 kubelet[2605]: I0913 01:35:05.548588 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-8d5f1b2fe1" podStartSLOduration=1.548578317 podStartE2EDuration="1.548578317s" podCreationTimestamp="2025-09-13 01:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:35:05.535317216 +0000 UTC m=+1.284136140" watchObservedRunningTime="2025-09-13 01:35:05.548578317 +0000 UTC m=+1.297397241" Sep 13 01:35:07.436013 sudo[1921]: pam_unix(sudo:session): session closed for user root Sep 13 01:35:07.526614 sshd[1902]: pam_unix(sshd:session): session closed for user core Sep 13 01:35:07.529315 systemd[1]: sshd@4-10.200.20.20:22-10.200.16.10:58044.service: Deactivated successfully. Sep 13 01:35:07.530315 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 01:35:07.530332 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit. Sep 13 01:35:07.531757 systemd-logind[1563]: Removed session 7. Sep 13 01:35:08.883873 kubelet[2605]: I0913 01:35:08.883827 2605 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 01:35:08.884246 env[1591]: time="2025-09-13T01:35:08.884125916Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 01:35:08.884636 kubelet[2605]: I0913 01:35:08.884606 2605 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 01:35:09.836219 kubelet[2605]: I0913 01:35:09.836177 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af957315-9ab1-4892-94b0-cb23933cbd65-lib-modules\") pod \"kube-proxy-pbfzw\" (UID: \"af957315-9ab1-4892-94b0-cb23933cbd65\") " pod="kube-system/kube-proxy-pbfzw" Sep 13 01:35:09.836378 kubelet[2605]: I0913 01:35:09.836252 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cilium-config-path\") pod \"cilium-q8jg5\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") " pod="kube-system/cilium-q8jg5" Sep 13 01:35:09.836378 kubelet[2605]: I0913 01:35:09.836276 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxkkf\" (UniqueName: \"kubernetes.io/projected/af957315-9ab1-4892-94b0-cb23933cbd65-kube-api-access-sxkkf\") pod \"kube-proxy-pbfzw\" (UID: \"af957315-9ab1-4892-94b0-cb23933cbd65\") " pod="kube-system/kube-proxy-pbfzw" Sep 13 01:35:09.836378 kubelet[2605]: I0913 01:35:09.836295 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-host-proc-sys-kernel\") pod \"cilium-q8jg5\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") " pod="kube-system/cilium-q8jg5" Sep 13 01:35:09.836378 kubelet[2605]: I0913 01:35:09.836349 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cilium-cgroup\") pod \"cilium-q8jg5\" (UID: 
\"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") " pod="kube-system/cilium-q8jg5" Sep 13 01:35:09.836378 kubelet[2605]: I0913 01:35:09.836363 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-host-proc-sys-net\") pod \"cilium-q8jg5\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") " pod="kube-system/cilium-q8jg5" Sep 13 01:35:09.836535 kubelet[2605]: I0913 01:35:09.836425 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af957315-9ab1-4892-94b0-cb23933cbd65-xtables-lock\") pod \"kube-proxy-pbfzw\" (UID: \"af957315-9ab1-4892-94b0-cb23933cbd65\") " pod="kube-system/kube-proxy-pbfzw" Sep 13 01:35:09.836535 kubelet[2605]: I0913 01:35:09.836443 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cilium-run\") pod \"cilium-q8jg5\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") " pod="kube-system/cilium-q8jg5" Sep 13 01:35:09.836535 kubelet[2605]: I0913 01:35:09.836458 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-clustermesh-secrets\") pod \"cilium-q8jg5\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") " pod="kube-system/cilium-q8jg5" Sep 13 01:35:09.836535 kubelet[2605]: I0913 01:35:09.836494 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-etc-cni-netd\") pod \"cilium-q8jg5\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") " pod="kube-system/cilium-q8jg5" Sep 13 01:35:09.836535 kubelet[2605]: I0913 
01:35:09.836517 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-hubble-tls\") pod \"cilium-q8jg5\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") " pod="kube-system/cilium-q8jg5"
Sep 13 01:35:09.836637 kubelet[2605]: I0913 01:35:09.836532 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-bpf-maps\") pod \"cilium-q8jg5\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") " pod="kube-system/cilium-q8jg5"
Sep 13 01:35:09.836637 kubelet[2605]: I0913 01:35:09.836585 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cni-path\") pod \"cilium-q8jg5\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") " pod="kube-system/cilium-q8jg5"
Sep 13 01:35:09.836637 kubelet[2605]: I0913 01:35:09.836601 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-lib-modules\") pod \"cilium-q8jg5\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") " pod="kube-system/cilium-q8jg5"
Sep 13 01:35:09.836699 kubelet[2605]: I0913 01:35:09.836639 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af957315-9ab1-4892-94b0-cb23933cbd65-kube-proxy\") pod \"kube-proxy-pbfzw\" (UID: \"af957315-9ab1-4892-94b0-cb23933cbd65\") " pod="kube-system/kube-proxy-pbfzw"
Sep 13 01:35:09.836699 kubelet[2605]: I0913 01:35:09.836656 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trxxs\" (UniqueName: \"kubernetes.io/projected/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-kube-api-access-trxxs\") pod \"cilium-q8jg5\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") " pod="kube-system/cilium-q8jg5"
Sep 13 01:35:09.836699 kubelet[2605]: I0913 01:35:09.836675 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-hostproc\") pod \"cilium-q8jg5\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") " pod="kube-system/cilium-q8jg5"
Sep 13 01:35:09.836777 kubelet[2605]: I0913 01:35:09.836714 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-xtables-lock\") pod \"cilium-q8jg5\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") " pod="kube-system/cilium-q8jg5"
Sep 13 01:35:09.940129 kubelet[2605]: I0913 01:35:09.940078 2605 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 13 01:35:10.038287 kubelet[2605]: I0913 01:35:10.038254 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhhvg\" (UniqueName: \"kubernetes.io/projected/296ed9b3-bae5-467d-9626-f67b8f8b6e4a-kube-api-access-qhhvg\") pod \"cilium-operator-5d85765b45-wdwwp\" (UID: \"296ed9b3-bae5-467d-9626-f67b8f8b6e4a\") " pod="kube-system/cilium-operator-5d85765b45-wdwwp"
Sep 13 01:35:10.038516 kubelet[2605]: I0913 01:35:10.038497 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/296ed9b3-bae5-467d-9626-f67b8f8b6e4a-cilium-config-path\") pod \"cilium-operator-5d85765b45-wdwwp\" (UID: \"296ed9b3-bae5-467d-9626-f67b8f8b6e4a\") " pod="kube-system/cilium-operator-5d85765b45-wdwwp"
Sep 13 01:35:10.047351 env[1591]: time="2025-09-13T01:35:10.047309227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8jg5,Uid:cf6c76e8-917c-4f0c-9e0b-63aa17d2c380,Namespace:kube-system,Attempt:0,}"
Sep 13 01:35:10.064370 env[1591]: time="2025-09-13T01:35:10.064103373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbfzw,Uid:af957315-9ab1-4892-94b0-cb23933cbd65,Namespace:kube-system,Attempt:0,}"
Sep 13 01:35:10.085768 env[1591]: time="2025-09-13T01:35:10.085696800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:35:10.085768 env[1591]: time="2025-09-13T01:35:10.085737481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:35:10.086070 env[1591]: time="2025-09-13T01:35:10.085749082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:35:10.086070 env[1591]: time="2025-09-13T01:35:10.086003688Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d pid=2687 runtime=io.containerd.runc.v2
Sep 13 01:35:10.124702 env[1591]: time="2025-09-13T01:35:10.123519360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8jg5,Uid:cf6c76e8-917c-4f0c-9e0b-63aa17d2c380,Namespace:kube-system,Attempt:0,} returns sandbox id \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\""
Sep 13 01:35:10.126900 env[1591]: time="2025-09-13T01:35:10.126066224Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 13 01:35:10.133499 env[1591]: time="2025-09-13T01:35:10.133416931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:35:10.133499 env[1591]: time="2025-09-13T01:35:10.133460652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:35:10.133499 env[1591]: time="2025-09-13T01:35:10.133477452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:35:10.133886 env[1591]: time="2025-09-13T01:35:10.133848301Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/69e77e090207672cc617cc3ef98092e442f2bf016b53d464fb6dd63af15750a5 pid=2730 runtime=io.containerd.runc.v2
Sep 13 01:35:10.178835 env[1591]: time="2025-09-13T01:35:10.178786201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbfzw,Uid:af957315-9ab1-4892-94b0-cb23933cbd65,Namespace:kube-system,Attempt:0,} returns sandbox id \"69e77e090207672cc617cc3ef98092e442f2bf016b53d464fb6dd63af15750a5\""
Sep 13 01:35:10.182780 env[1591]: time="2025-09-13T01:35:10.182737301Z" level=info msg="CreateContainer within sandbox \"69e77e090207672cc617cc3ef98092e442f2bf016b53d464fb6dd63af15750a5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 01:35:10.225631 env[1591]: time="2025-09-13T01:35:10.225577908Z" level=info msg="CreateContainer within sandbox \"69e77e090207672cc617cc3ef98092e442f2bf016b53d464fb6dd63af15750a5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a3843d76b810878e1a7df0315f3cf0e4707d89559f679a08a49049f01f97e987\""
Sep 13 01:35:10.228608 env[1591]: time="2025-09-13T01:35:10.228566584Z" level=info msg="StartContainer for \"a3843d76b810878e1a7df0315f3cf0e4707d89559f679a08a49049f01f97e987\""
Sep 13 01:35:10.262354 env[1591]: time="2025-09-13T01:35:10.262298799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wdwwp,Uid:296ed9b3-bae5-467d-9626-f67b8f8b6e4a,Namespace:kube-system,Attempt:0,}"
Sep 13 01:35:10.296026 env[1591]: time="2025-09-13T01:35:10.295971773Z" level=info msg="StartContainer for \"a3843d76b810878e1a7df0315f3cf0e4707d89559f679a08a49049f01f97e987\" returns successfully"
Sep 13 01:35:10.320780 env[1591]: time="2025-09-13T01:35:10.320583918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:35:10.320780 env[1591]: time="2025-09-13T01:35:10.320624959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:35:10.320780 env[1591]: time="2025-09-13T01:35:10.320635359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:35:10.321798 env[1591]: time="2025-09-13T01:35:10.321191413Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02 pid=2806 runtime=io.containerd.runc.v2
Sep 13 01:35:10.375327 env[1591]: time="2025-09-13T01:35:10.374647409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wdwwp,Uid:296ed9b3-bae5-467d-9626-f67b8f8b6e4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02\""
Sep 13 01:35:10.551130 kubelet[2605]: I0913 01:35:10.551074 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pbfzw" podStartSLOduration=1.551054963 podStartE2EDuration="1.551054963s" podCreationTimestamp="2025-09-13 01:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:35:10.51110971 +0000 UTC m=+6.259928634" watchObservedRunningTime="2025-09-13 01:35:10.551054963 +0000 UTC m=+6.299873887"
Sep 13 01:35:15.426037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount153465846.mount: Deactivated successfully.
Sep 13 01:35:18.018126 env[1591]: time="2025-09-13T01:35:18.018080913Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:35:18.025374 env[1591]: time="2025-09-13T01:35:18.025336386Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:35:18.029683 env[1591]: time="2025-09-13T01:35:18.029635476Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:35:18.030331 env[1591]: time="2025-09-13T01:35:18.030300330Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 13 01:35:18.032055 env[1591]: time="2025-09-13T01:35:18.032021966Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 13 01:35:18.034320 env[1591]: time="2025-09-13T01:35:18.033326074Z" level=info msg="CreateContainer within sandbox \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 01:35:18.059160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount942723396.mount: Deactivated successfully.
Sep 13 01:35:18.068515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659777980.mount: Deactivated successfully.
Sep 13 01:35:18.080026 env[1591]: time="2025-09-13T01:35:18.079971135Z" level=info msg="CreateContainer within sandbox \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a\""
Sep 13 01:35:18.082051 env[1591]: time="2025-09-13T01:35:18.081590249Z" level=info msg="StartContainer for \"d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a\""
Sep 13 01:35:18.133851 env[1591]: time="2025-09-13T01:35:18.133791787Z" level=info msg="StartContainer for \"d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a\" returns successfully"
Sep 13 01:35:19.056492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a-rootfs.mount: Deactivated successfully.
Sep 13 01:35:19.616748 env[1591]: time="2025-09-13T01:35:19.616536726Z" level=info msg="shim disconnected" id=d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a
Sep 13 01:35:19.616748 env[1591]: time="2025-09-13T01:35:19.616594128Z" level=warning msg="cleaning up after shim disconnected" id=d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a namespace=k8s.io
Sep 13 01:35:19.616748 env[1591]: time="2025-09-13T01:35:19.616605368Z" level=info msg="cleaning up dead shim"
Sep 13 01:35:19.624525 env[1591]: time="2025-09-13T01:35:19.624481570Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:35:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3016 runtime=io.containerd.runc.v2\n"
Sep 13 01:35:20.532060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1525032988.mount: Deactivated successfully.
Sep 13 01:35:20.537406 env[1591]: time="2025-09-13T01:35:20.534509522Z" level=info msg="CreateContainer within sandbox \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 01:35:20.664372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1312254112.mount: Deactivated successfully.
Sep 13 01:35:20.682105 env[1591]: time="2025-09-13T01:35:20.682052450Z" level=info msg="CreateContainer within sandbox \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c\""
Sep 13 01:35:20.683837 env[1591]: time="2025-09-13T01:35:20.683584920Z" level=info msg="StartContainer for \"a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c\""
Sep 13 01:35:20.733545 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 01:35:20.733863 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 01:35:20.734058 systemd[1]: Stopping systemd-sysctl.service...
Sep 13 01:35:20.735715 env[1591]: time="2025-09-13T01:35:20.735677208Z" level=info msg="StartContainer for \"a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c\" returns successfully"
Sep 13 01:35:20.736822 systemd[1]: Starting systemd-sysctl.service...
Sep 13 01:35:20.750923 systemd[1]: Finished systemd-sysctl.service.
Sep 13 01:35:20.779639 env[1591]: time="2025-09-13T01:35:20.779590851Z" level=info msg="shim disconnected" id=a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c
Sep 13 01:35:20.779639 env[1591]: time="2025-09-13T01:35:20.779634172Z" level=warning msg="cleaning up after shim disconnected" id=a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c namespace=k8s.io
Sep 13 01:35:20.779639 env[1591]: time="2025-09-13T01:35:20.779644972Z" level=info msg="cleaning up dead shim"
Sep 13 01:35:20.786680 env[1591]: time="2025-09-13T01:35:20.786630913Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:35:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3082 runtime=io.containerd.runc.v2\n"
Sep 13 01:35:21.290747 env[1591]: time="2025-09-13T01:35:21.290703723Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:35:21.298795 env[1591]: time="2025-09-13T01:35:21.298757042Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:35:21.303557 env[1591]: time="2025-09-13T01:35:21.303508615Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:35:21.304160 env[1591]: time="2025-09-13T01:35:21.304130988Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 13 01:35:21.308029 env[1591]: time="2025-09-13T01:35:21.307985663Z" level=info msg="CreateContainer within sandbox \"b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 01:35:21.338961 env[1591]: time="2025-09-13T01:35:21.338919832Z" level=info msg="CreateContainer within sandbox \"b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e\""
Sep 13 01:35:21.341106 env[1591]: time="2025-09-13T01:35:21.341070234Z" level=info msg="StartContainer for \"9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e\""
Sep 13 01:35:21.384925 env[1591]: time="2025-09-13T01:35:21.384856536Z" level=info msg="StartContainer for \"9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e\" returns successfully"
Sep 13 01:35:21.522541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1271708200.mount: Deactivated successfully.
Sep 13 01:35:21.556857 env[1591]: time="2025-09-13T01:35:21.556817359Z" level=info msg="CreateContainer within sandbox \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 01:35:21.612438 env[1591]: time="2025-09-13T01:35:21.612355211Z" level=info msg="CreateContainer within sandbox \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98\""
Sep 13 01:35:21.613152 env[1591]: time="2025-09-13T01:35:21.613119026Z" level=info msg="StartContainer for \"2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98\""
Sep 13 01:35:21.640732 kubelet[2605]: I0913 01:35:21.640223 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-wdwwp" podStartSLOduration=1.713124767 podStartE2EDuration="12.640202919s" podCreationTimestamp="2025-09-13 01:35:09 +0000 UTC" firstStartedPulling="2025-09-13 01:35:10.378221579 +0000 UTC m=+6.127040503" lastFinishedPulling="2025-09-13 01:35:21.305299731 +0000 UTC m=+17.054118655" observedRunningTime="2025-09-13 01:35:21.582755789 +0000 UTC m=+17.331574713" watchObservedRunningTime="2025-09-13 01:35:21.640202919 +0000 UTC m=+17.389021843"
Sep 13 01:35:21.694289 env[1591]: time="2025-09-13T01:35:21.694249182Z" level=info msg="StartContainer for \"2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98\" returns successfully"
Sep 13 01:35:22.059486 env[1591]: time="2025-09-13T01:35:22.059437462Z" level=info msg="shim disconnected" id=2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98
Sep 13 01:35:22.059770 env[1591]: time="2025-09-13T01:35:22.059750308Z" level=warning msg="cleaning up after shim disconnected" id=2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98 namespace=k8s.io
Sep 13 01:35:22.059837 env[1591]: time="2025-09-13T01:35:22.059823189Z" level=info msg="cleaning up dead shim"
Sep 13 01:35:22.078962 env[1591]: time="2025-09-13T01:35:22.078920517Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:35:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3177 runtime=io.containerd.runc.v2\n"
Sep 13 01:35:22.521799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98-rootfs.mount: Deactivated successfully.
Sep 13 01:35:22.554763 env[1591]: time="2025-09-13T01:35:22.554130703Z" level=info msg="CreateContainer within sandbox \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 01:35:22.582160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount493576042.mount: Deactivated successfully.
Sep 13 01:35:22.590615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2748312068.mount: Deactivated successfully.
Sep 13 01:35:22.606442 env[1591]: time="2025-09-13T01:35:22.603562454Z" level=info msg="CreateContainer within sandbox \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db\""
Sep 13 01:35:22.607227 env[1591]: time="2025-09-13T01:35:22.607196164Z" level=info msg="StartContainer for \"0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db\""
Sep 13 01:35:22.659130 env[1591]: time="2025-09-13T01:35:22.659084763Z" level=info msg="StartContainer for \"0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db\" returns successfully"
Sep 13 01:35:22.684712 env[1591]: time="2025-09-13T01:35:22.684652735Z" level=info msg="shim disconnected" id=0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db
Sep 13 01:35:22.685032 env[1591]: time="2025-09-13T01:35:22.684990902Z" level=warning msg="cleaning up after shim disconnected" id=0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db namespace=k8s.io
Sep 13 01:35:22.685121 env[1591]: time="2025-09-13T01:35:22.685107304Z" level=info msg="cleaning up dead shim"
Sep 13 01:35:22.691524 env[1591]: time="2025-09-13T01:35:22.691480547Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:35:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3233 runtime=io.containerd.runc.v2\n"
Sep 13 01:35:23.556653 env[1591]: time="2025-09-13T01:35:23.556612049Z" level=info msg="CreateContainer within sandbox \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 01:35:23.595990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4210145991.mount: Deactivated successfully.
Sep 13 01:35:23.608990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2028613586.mount: Deactivated successfully.
Sep 13 01:35:23.619172 env[1591]: time="2025-09-13T01:35:23.619062945Z" level=info msg="CreateContainer within sandbox \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03\""
Sep 13 01:35:23.621106 env[1591]: time="2025-09-13T01:35:23.620625015Z" level=info msg="StartContainer for \"b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03\""
Sep 13 01:35:23.679654 env[1591]: time="2025-09-13T01:35:23.679605325Z" level=info msg="StartContainer for \"b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03\" returns successfully"
Sep 13 01:35:23.759419 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Sep 13 01:35:23.822110 kubelet[2605]: I0913 01:35:23.822009 2605 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 13 01:35:23.920420 kubelet[2605]: I0913 01:35:23.920366 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh9wv\" (UniqueName: \"kubernetes.io/projected/faff158e-c90a-458e-bbaf-1da35c446007-kube-api-access-lh9wv\") pod \"coredns-7c65d6cfc9-lkcn9\" (UID: \"faff158e-c90a-458e-bbaf-1da35c446007\") " pod="kube-system/coredns-7c65d6cfc9-lkcn9"
Sep 13 01:35:23.920420 kubelet[2605]: I0913 01:35:23.920421 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/117bf835-9589-4ea9-baf6-5baaa5e850b5-config-volume\") pod \"coredns-7c65d6cfc9-xbxj6\" (UID: \"117bf835-9589-4ea9-baf6-5baaa5e850b5\") " pod="kube-system/coredns-7c65d6cfc9-xbxj6"
Sep 13 01:35:23.920595 kubelet[2605]: I0913 01:35:23.920442 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qqn7\" (UniqueName: \"kubernetes.io/projected/117bf835-9589-4ea9-baf6-5baaa5e850b5-kube-api-access-7qqn7\") pod \"coredns-7c65d6cfc9-xbxj6\" (UID: \"117bf835-9589-4ea9-baf6-5baaa5e850b5\") " pod="kube-system/coredns-7c65d6cfc9-xbxj6"
Sep 13 01:35:23.920595 kubelet[2605]: I0913 01:35:23.920474 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faff158e-c90a-458e-bbaf-1da35c446007-config-volume\") pod \"coredns-7c65d6cfc9-lkcn9\" (UID: \"faff158e-c90a-458e-bbaf-1da35c446007\") " pod="kube-system/coredns-7c65d6cfc9-lkcn9"
Sep 13 01:35:24.163680 env[1591]: time="2025-09-13T01:35:24.163270131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lkcn9,Uid:faff158e-c90a-458e-bbaf-1da35c446007,Namespace:kube-system,Attempt:0,}"
Sep 13 01:35:24.172943 env[1591]: time="2025-09-13T01:35:24.172902868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xbxj6,Uid:117bf835-9589-4ea9-baf6-5baaa5e850b5,Namespace:kube-system,Attempt:0,}"
Sep 13 01:35:24.423422 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Sep 13 01:35:24.579359 kubelet[2605]: I0913 01:35:24.579286 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q8jg5" podStartSLOduration=7.673228574 podStartE2EDuration="15.57926852s" podCreationTimestamp="2025-09-13 01:35:09 +0000 UTC" firstStartedPulling="2025-09-13 01:35:10.125679734 +0000 UTC m=+5.874498658" lastFinishedPulling="2025-09-13 01:35:18.0317196 +0000 UTC m=+13.780538604" observedRunningTime="2025-09-13 01:35:24.578932794 +0000 UTC m=+20.327751718" watchObservedRunningTime="2025-09-13 01:35:24.57926852 +0000 UTC m=+20.328087444"
Sep 13 01:35:26.088767 systemd-networkd[1769]: cilium_host: Link UP
Sep 13 01:35:26.089071 systemd-networkd[1769]: cilium_net: Link UP
Sep 13 01:35:26.102753 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Sep 13 01:35:26.102877 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 13 01:35:26.104356 systemd-networkd[1769]: cilium_net: Gained carrier
Sep 13 01:35:26.104581 systemd-networkd[1769]: cilium_host: Gained carrier
Sep 13 01:35:26.106009 systemd-networkd[1769]: cilium_net: Gained IPv6LL
Sep 13 01:35:26.316554 systemd-networkd[1769]: cilium_vxlan: Link UP
Sep 13 01:35:26.316559 systemd-networkd[1769]: cilium_vxlan: Gained carrier
Sep 13 01:35:26.588418 kernel: NET: Registered PF_ALG protocol family
Sep 13 01:35:26.818516 systemd-networkd[1769]: cilium_host: Gained IPv6LL
Sep 13 01:35:27.476273 systemd-networkd[1769]: lxc_health: Link UP
Sep 13 01:35:27.490760 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 01:35:27.491204 systemd-networkd[1769]: lxc_health: Gained carrier
Sep 13 01:35:27.761891 systemd-networkd[1769]: lxccfa1490a057f: Link UP
Sep 13 01:35:27.771414 kernel: eth0: renamed from tmpcabc8
Sep 13 01:35:27.782130 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccfa1490a057f: link becomes ready
Sep 13 01:35:27.782586 systemd-networkd[1769]: lxccfa1490a057f: Gained carrier
Sep 13 01:35:27.783524 systemd-networkd[1769]: cilium_vxlan: Gained IPv6LL
Sep 13 01:35:27.792603 systemd-networkd[1769]: lxcfec2ac16dcd0: Link UP
Sep 13 01:35:27.816693 kernel: eth0: renamed from tmpa3931
Sep 13 01:35:27.844897 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfec2ac16dcd0: link becomes ready
Sep 13 01:35:27.844539 systemd-networkd[1769]: lxcfec2ac16dcd0: Gained carrier
Sep 13 01:35:28.802496 systemd-networkd[1769]: lxc_health: Gained IPv6LL
Sep 13 01:35:28.866600 systemd-networkd[1769]: lxccfa1490a057f: Gained IPv6LL
Sep 13 01:35:28.994507 systemd-networkd[1769]: lxcfec2ac16dcd0: Gained IPv6LL
Sep 13 01:35:31.498010 env[1591]: time="2025-09-13T01:35:31.497926815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:35:31.498352 env[1591]: time="2025-09-13T01:35:31.498019417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:35:31.498352 env[1591]: time="2025-09-13T01:35:31.498044257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:35:31.498352 env[1591]: time="2025-09-13T01:35:31.498207500Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cabc878ec27dd8b9f42f250f6af8003fd79815d3db5324a0acdc693e2e6a605a pid=3778 runtime=io.containerd.runc.v2
Sep 13 01:35:31.518451 env[1591]: time="2025-09-13T01:35:31.516672714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:35:31.518451 env[1591]: time="2025-09-13T01:35:31.516720275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:35:31.518451 env[1591]: time="2025-09-13T01:35:31.516731075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:35:31.518451 env[1591]: time="2025-09-13T01:35:31.517896414Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3931383d9281b7c263de6ad1e0eb8962edfdbc01a6c1477974562451e54927f pid=3798 runtime=io.containerd.runc.v2
Sep 13 01:35:31.586343 env[1591]: time="2025-09-13T01:35:31.586284026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lkcn9,Uid:faff158e-c90a-458e-bbaf-1da35c446007,Namespace:kube-system,Attempt:0,} returns sandbox id \"cabc878ec27dd8b9f42f250f6af8003fd79815d3db5324a0acdc693e2e6a605a\""
Sep 13 01:35:31.591452 env[1591]: time="2025-09-13T01:35:31.591080223Z" level=info msg="CreateContainer within sandbox \"cabc878ec27dd8b9f42f250f6af8003fd79815d3db5324a0acdc693e2e6a605a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 01:35:31.622530 env[1591]: time="2025-09-13T01:35:31.622474564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xbxj6,Uid:117bf835-9589-4ea9-baf6-5baaa5e850b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3931383d9281b7c263de6ad1e0eb8962edfdbc01a6c1477974562451e54927f\""
Sep 13 01:35:31.628899 env[1591]: time="2025-09-13T01:35:31.628857906Z" level=info msg="CreateContainer within sandbox \"a3931383d9281b7c263de6ad1e0eb8962edfdbc01a6c1477974562451e54927f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 01:35:31.658419 env[1591]: time="2025-09-13T01:35:31.658356537Z" level=info msg="CreateContainer within sandbox \"cabc878ec27dd8b9f42f250f6af8003fd79815d3db5324a0acdc693e2e6a605a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"768c79d8f2b04dadf149655731c3df7f6973b9ca2d15480643f675bb8758af3b\""
Sep 13 01:35:31.659121 env[1591]: time="2025-09-13T01:35:31.659094709Z" level=info msg="StartContainer for \"768c79d8f2b04dadf149655731c3df7f6973b9ca2d15480643f675bb8758af3b\""
Sep 13 01:35:31.684593 env[1591]: time="2025-09-13T01:35:31.684538035Z" level=info msg="CreateContainer within sandbox \"a3931383d9281b7c263de6ad1e0eb8962edfdbc01a6c1477974562451e54927f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1000cbd8031be83705271e8d73de3f28f305909740141666be574f6b2aea203b\""
Sep 13 01:35:31.689418 env[1591]: time="2025-09-13T01:35:31.687621324Z" level=info msg="StartContainer for \"1000cbd8031be83705271e8d73de3f28f305909740141666be574f6b2aea203b\""
Sep 13 01:35:31.722891 env[1591]: time="2025-09-13T01:35:31.722827087Z" level=info msg="StartContainer for \"768c79d8f2b04dadf149655731c3df7f6973b9ca2d15480643f675bb8758af3b\" returns successfully"
Sep 13 01:35:31.761202 env[1591]: time="2025-09-13T01:35:31.761090498Z" level=info msg="StartContainer for \"1000cbd8031be83705271e8d73de3f28f305909740141666be574f6b2aea203b\" returns successfully"
Sep 13 01:35:32.502418 systemd[1]: run-containerd-runc-k8s.io-a3931383d9281b7c263de6ad1e0eb8962edfdbc01a6c1477974562451e54927f-runc.tzCcPI.mount: Deactivated successfully.
Sep 13 01:35:32.596898 kubelet[2605]: I0913 01:35:32.596838 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lkcn9" podStartSLOduration=23.596821939 podStartE2EDuration="23.596821939s" podCreationTimestamp="2025-09-13 01:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:35:32.596442053 +0000 UTC m=+28.345260977" watchObservedRunningTime="2025-09-13 01:35:32.596821939 +0000 UTC m=+28.345640823" Sep 13 01:35:32.636571 kubelet[2605]: I0913 01:35:32.636512 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xbxj6" podStartSLOduration=23.63649404 podStartE2EDuration="23.63649404s" podCreationTimestamp="2025-09-13 01:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:35:32.635291502 +0000 UTC m=+28.384110426" watchObservedRunningTime="2025-09-13 01:35:32.63649404 +0000 UTC m=+28.385312964" Sep 13 01:35:37.321931 kubelet[2605]: I0913 01:35:37.321891 2605 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 01:37:38.862570 systemd[1]: Started sshd@5-10.200.20.20:22-10.200.16.10:40508.service. Sep 13 01:37:39.277874 sshd[3945]: Accepted publickey for core from 10.200.16.10 port 40508 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:39.279583 sshd[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:39.283067 systemd-logind[1563]: New session 8 of user core. Sep 13 01:37:39.283952 systemd[1]: Started session-8.scope. Sep 13 01:37:39.687094 sshd[3945]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:39.689634 systemd-logind[1563]: Session 8 logged out. Waiting for processes to exit. 
Sep 13 01:37:39.689857 systemd[1]: sshd@5-10.200.20.20:22-10.200.16.10:40508.service: Deactivated successfully.
Sep 13 01:37:39.690657 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 01:37:39.691075 systemd-logind[1563]: Removed session 8.
Sep 13 01:37:44.755255 systemd[1]: Started sshd@6-10.200.20.20:22-10.200.16.10:60406.service.
Sep 13 01:37:45.168942 sshd[3965]: Accepted publickey for core from 10.200.16.10 port 60406 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:37:45.170667 sshd[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:37:45.174438 systemd-logind[1563]: New session 9 of user core.
Sep 13 01:37:45.174979 systemd[1]: Started session-9.scope.
Sep 13 01:37:45.537487 sshd[3965]: pam_unix(sshd:session): session closed for user core
Sep 13 01:37:45.540148 systemd[1]: sshd@6-10.200.20.20:22-10.200.16.10:60406.service: Deactivated successfully.
Sep 13 01:37:45.541559 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 01:37:45.542127 systemd-logind[1563]: Session 9 logged out. Waiting for processes to exit.
Sep 13 01:37:45.542903 systemd-logind[1563]: Removed session 9.
Sep 13 01:37:50.603842 systemd[1]: Started sshd@7-10.200.20.20:22-10.200.16.10:40266.service.
Sep 13 01:37:51.023659 sshd[3979]: Accepted publickey for core from 10.200.16.10 port 40266 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:37:51.024770 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:37:51.028832 systemd-logind[1563]: New session 10 of user core.
Sep 13 01:37:51.029254 systemd[1]: Started session-10.scope.
Sep 13 01:37:51.394648 sshd[3979]: pam_unix(sshd:session): session closed for user core
Sep 13 01:37:51.397500 systemd[1]: sshd@7-10.200.20.20:22-10.200.16.10:40266.service: Deactivated successfully.
Sep 13 01:37:51.398242 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 01:37:51.398825 systemd-logind[1563]: Session 10 logged out. Waiting for processes to exit.
Sep 13 01:37:51.399616 systemd-logind[1563]: Removed session 10.
Sep 13 01:37:56.462491 systemd[1]: Started sshd@8-10.200.20.20:22-10.200.16.10:40278.service.
Sep 13 01:37:56.874487 sshd[3993]: Accepted publickey for core from 10.200.16.10 port 40278 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:37:56.875876 sshd[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:37:56.880328 systemd[1]: Started session-11.scope.
Sep 13 01:37:56.881278 systemd-logind[1563]: New session 11 of user core.
Sep 13 01:37:57.233364 sshd[3993]: pam_unix(sshd:session): session closed for user core
Sep 13 01:37:57.236278 systemd-logind[1563]: Session 11 logged out. Waiting for processes to exit.
Sep 13 01:37:57.237059 systemd[1]: sshd@8-10.200.20.20:22-10.200.16.10:40278.service: Deactivated successfully.
Sep 13 01:37:57.237885 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 01:37:57.238622 systemd-logind[1563]: Removed session 11.
Sep 13 01:38:02.300288 systemd[1]: Started sshd@9-10.200.20.20:22-10.200.16.10:46920.service.
Sep 13 01:38:02.713298 sshd[4006]: Accepted publickey for core from 10.200.16.10 port 46920 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:02.715082 sshd[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:02.719549 systemd[1]: Started session-12.scope.
Sep 13 01:38:02.720035 systemd-logind[1563]: New session 12 of user core.
Sep 13 01:38:03.072253 sshd[4006]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:03.074790 systemd-logind[1563]: Session 12 logged out. Waiting for processes to exit.
Sep 13 01:38:03.074942 systemd[1]: sshd@9-10.200.20.20:22-10.200.16.10:46920.service: Deactivated successfully.
Sep 13 01:38:03.075764 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 01:38:03.076205 systemd-logind[1563]: Removed session 12.
Sep 13 01:38:03.140791 systemd[1]: Started sshd@10-10.200.20.20:22-10.200.16.10:46932.service.
Sep 13 01:38:03.551828 sshd[4019]: Accepted publickey for core from 10.200.16.10 port 46932 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:03.553441 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:03.557228 systemd-logind[1563]: New session 13 of user core.
Sep 13 01:38:03.557701 systemd[1]: Started session-13.scope.
Sep 13 01:38:03.978063 sshd[4019]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:03.980726 systemd-logind[1563]: Session 13 logged out. Waiting for processes to exit.
Sep 13 01:38:03.980946 systemd[1]: sshd@10-10.200.20.20:22-10.200.16.10:46932.service: Deactivated successfully.
Sep 13 01:38:03.981753 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 01:38:03.982192 systemd-logind[1563]: Removed session 13.
Sep 13 01:38:04.044435 systemd[1]: Started sshd@11-10.200.20.20:22-10.200.16.10:46934.service.
Sep 13 01:38:04.460267 sshd[4031]: Accepted publickey for core from 10.200.16.10 port 46934 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:04.462488 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:04.466330 systemd-logind[1563]: New session 14 of user core.
Sep 13 01:38:04.466840 systemd[1]: Started session-14.scope.
Sep 13 01:38:04.833864 sshd[4031]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:04.836294 systemd[1]: sshd@11-10.200.20.20:22-10.200.16.10:46934.service: Deactivated successfully.
Sep 13 01:38:04.837304 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 01:38:04.837660 systemd-logind[1563]: Session 14 logged out. Waiting for processes to exit.
Sep 13 01:38:04.838788 systemd-logind[1563]: Removed session 14.
Sep 13 01:38:09.900689 systemd[1]: Started sshd@12-10.200.20.20:22-10.200.16.10:33764.service.
Sep 13 01:38:10.319142 sshd[4045]: Accepted publickey for core from 10.200.16.10 port 33764 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:10.319924 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:10.324552 systemd[1]: Started session-15.scope.
Sep 13 01:38:10.324893 systemd-logind[1563]: New session 15 of user core.
Sep 13 01:38:10.688598 sshd[4045]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:10.691415 systemd-logind[1563]: Session 15 logged out. Waiting for processes to exit.
Sep 13 01:38:10.692633 systemd[1]: sshd@12-10.200.20.20:22-10.200.16.10:33764.service: Deactivated successfully.
Sep 13 01:38:10.693468 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 01:38:10.694892 systemd-logind[1563]: Removed session 15.
Sep 13 01:38:15.755250 systemd[1]: Started sshd@13-10.200.20.20:22-10.200.16.10:33778.service.
Sep 13 01:38:16.167047 sshd[4060]: Accepted publickey for core from 10.200.16.10 port 33778 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:16.168833 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:16.173590 systemd[1]: Started session-16.scope.
Sep 13 01:38:16.173949 systemd-logind[1563]: New session 16 of user core.
Sep 13 01:38:16.537305 sshd[4060]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:16.540508 systemd[1]: sshd@13-10.200.20.20:22-10.200.16.10:33778.service: Deactivated successfully.
Sep 13 01:38:16.541434 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 01:38:16.542596 systemd-logind[1563]: Session 16 logged out. Waiting for processes to exit.
Sep 13 01:38:16.543330 systemd-logind[1563]: Removed session 16.
Sep 13 01:38:16.608918 systemd[1]: Started sshd@14-10.200.20.20:22-10.200.16.10:33792.service.
Sep 13 01:38:17.019878 sshd[4072]: Accepted publickey for core from 10.200.16.10 port 33792 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:17.021526 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:17.025806 systemd[1]: Started session-17.scope.
Sep 13 01:38:17.026455 systemd-logind[1563]: New session 17 of user core.
Sep 13 01:38:17.415199 sshd[4072]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:17.417811 systemd-logind[1563]: Session 17 logged out. Waiting for processes to exit.
Sep 13 01:38:17.417964 systemd[1]: sshd@14-10.200.20.20:22-10.200.16.10:33792.service: Deactivated successfully.
Sep 13 01:38:17.418802 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 01:38:17.419229 systemd-logind[1563]: Removed session 17.
Sep 13 01:38:17.481836 systemd[1]: Started sshd@15-10.200.20.20:22-10.200.16.10:33798.service.
Sep 13 01:38:17.898507 sshd[4082]: Accepted publickey for core from 10.200.16.10 port 33798 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:17.899936 sshd[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:17.907672 systemd[1]: Started session-18.scope.
Sep 13 01:38:17.908440 systemd-logind[1563]: New session 18 of user core.
Sep 13 01:38:19.269479 sshd[4082]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:19.272784 systemd[1]: sshd@15-10.200.20.20:22-10.200.16.10:33798.service: Deactivated successfully.
Sep 13 01:38:19.273484 systemd-logind[1563]: Session 18 logged out. Waiting for processes to exit.
Sep 13 01:38:19.273589 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 01:38:19.274713 systemd-logind[1563]: Removed session 18.
Sep 13 01:38:19.337948 systemd[1]: Started sshd@16-10.200.20.20:22-10.200.16.10:33812.service.
Sep 13 01:38:19.753963 sshd[4100]: Accepted publickey for core from 10.200.16.10 port 33812 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:19.754487 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:19.758835 systemd[1]: Started session-19.scope.
Sep 13 01:38:19.759118 systemd-logind[1563]: New session 19 of user core.
Sep 13 01:38:20.232496 sshd[4100]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:20.235116 systemd-logind[1563]: Session 19 logged out. Waiting for processes to exit.
Sep 13 01:38:20.235259 systemd[1]: sshd@16-10.200.20.20:22-10.200.16.10:33812.service: Deactivated successfully.
Sep 13 01:38:20.236050 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 01:38:20.236507 systemd-logind[1563]: Removed session 19.
Sep 13 01:38:20.301198 systemd[1]: Started sshd@17-10.200.20.20:22-10.200.16.10:56064.service.
Sep 13 01:38:20.715331 sshd[4110]: Accepted publickey for core from 10.200.16.10 port 56064 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:20.718259 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:20.722160 systemd-logind[1563]: New session 20 of user core.
Sep 13 01:38:20.722861 systemd[1]: Started session-20.scope.
Sep 13 01:38:21.098603 sshd[4110]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:21.101236 systemd-logind[1563]: Session 20 logged out. Waiting for processes to exit.
Sep 13 01:38:21.101543 systemd[1]: sshd@17-10.200.20.20:22-10.200.16.10:56064.service: Deactivated successfully.
Sep 13 01:38:21.102305 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 01:38:21.103362 systemd-logind[1563]: Removed session 20.
Sep 13 01:38:26.166059 systemd[1]: Started sshd@18-10.200.20.20:22-10.200.16.10:56072.service.
Sep 13 01:38:26.577771 sshd[4125]: Accepted publickey for core from 10.200.16.10 port 56072 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:26.579511 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:26.583569 systemd-logind[1563]: New session 21 of user core.
Sep 13 01:38:26.583914 systemd[1]: Started session-21.scope.
Sep 13 01:38:26.939591 sshd[4125]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:26.942615 systemd-logind[1563]: Session 21 logged out. Waiting for processes to exit.
Sep 13 01:38:26.942760 systemd[1]: sshd@18-10.200.20.20:22-10.200.16.10:56072.service: Deactivated successfully.
Sep 13 01:38:26.943642 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 01:38:26.944085 systemd-logind[1563]: Removed session 21.
Sep 13 01:38:32.011509 systemd[1]: Started sshd@19-10.200.20.20:22-10.200.16.10:45220.service.
Sep 13 01:38:32.425153 sshd[4139]: Accepted publickey for core from 10.200.16.10 port 45220 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:32.426834 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:32.431688 systemd[1]: Started session-22.scope.
Sep 13 01:38:32.432618 systemd-logind[1563]: New session 22 of user core.
Sep 13 01:38:32.798610 sshd[4139]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:32.801358 systemd-logind[1563]: Session 22 logged out. Waiting for processes to exit.
Sep 13 01:38:32.802013 systemd[1]: sshd@19-10.200.20.20:22-10.200.16.10:45220.service: Deactivated successfully.
Sep 13 01:38:32.802893 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 01:38:32.803951 systemd-logind[1563]: Removed session 22.
Sep 13 01:38:37.866911 systemd[1]: Started sshd@20-10.200.20.20:22-10.200.16.10:45228.service.
Sep 13 01:38:38.282023 sshd[4155]: Accepted publickey for core from 10.200.16.10 port 45228 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:38.283705 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:38.289119 systemd[1]: Started session-23.scope.
Sep 13 01:38:38.289330 systemd-logind[1563]: New session 23 of user core.
Sep 13 01:38:38.651603 sshd[4155]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:38.654318 systemd-logind[1563]: Session 23 logged out. Waiting for processes to exit.
Sep 13 01:38:38.655006 systemd[1]: sshd@20-10.200.20.20:22-10.200.16.10:45228.service: Deactivated successfully.
Sep 13 01:38:38.655778 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 01:38:38.656651 systemd-logind[1563]: Removed session 23.
Sep 13 01:38:38.718909 systemd[1]: Started sshd@21-10.200.20.20:22-10.200.16.10:45230.service.
Sep 13 01:38:39.135855 sshd[4169]: Accepted publickey for core from 10.200.16.10 port 45230 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:39.137541 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:39.141947 systemd[1]: Started session-24.scope.
Sep 13 01:38:39.142611 systemd-logind[1563]: New session 24 of user core.
Sep 13 01:38:41.215050 systemd[1]: run-containerd-runc-k8s.io-b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03-runc.2Mu8sz.mount: Deactivated successfully.
Sep 13 01:38:41.220308 env[1591]: time="2025-09-13T01:38:41.220251526Z" level=info msg="StopContainer for \"9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e\" with timeout 30 (s)"
Sep 13 01:38:41.220767 env[1591]: time="2025-09-13T01:38:41.220727605Z" level=info msg="Stop container \"9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e\" with signal terminated"
Sep 13 01:38:41.241030 env[1591]: time="2025-09-13T01:38:41.240977768Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 01:38:41.247308 env[1591]: time="2025-09-13T01:38:41.247272916Z" level=info msg="StopContainer for \"b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03\" with timeout 2 (s)"
Sep 13 01:38:41.247804 env[1591]: time="2025-09-13T01:38:41.247773915Z" level=info msg="Stop container \"b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03\" with signal terminated"
Sep 13 01:38:41.253865 systemd-networkd[1769]: lxc_health: Link DOWN
Sep 13 01:38:41.253873 systemd-networkd[1769]: lxc_health: Lost carrier
Sep 13 01:38:41.264368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e-rootfs.mount: Deactivated successfully.
Sep 13 01:38:41.291783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03-rootfs.mount: Deactivated successfully.
Sep 13 01:38:41.324731 env[1591]: time="2025-09-13T01:38:41.324679613Z" level=info msg="shim disconnected" id=b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03
Sep 13 01:38:41.325016 env[1591]: time="2025-09-13T01:38:41.324988172Z" level=warning msg="cleaning up after shim disconnected" id=b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03 namespace=k8s.io
Sep 13 01:38:41.325162 env[1591]: time="2025-09-13T01:38:41.325139532Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:41.325392 env[1591]: time="2025-09-13T01:38:41.325293771Z" level=info msg="shim disconnected" id=9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e
Sep 13 01:38:41.325451 env[1591]: time="2025-09-13T01:38:41.325378371Z" level=warning msg="cleaning up after shim disconnected" id=9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e namespace=k8s.io
Sep 13 01:38:41.325451 env[1591]: time="2025-09-13T01:38:41.325411251Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:41.334087 env[1591]: time="2025-09-13T01:38:41.334044035Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4242 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:41.336961 env[1591]: time="2025-09-13T01:38:41.336915790Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4243 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:41.339781 env[1591]: time="2025-09-13T01:38:41.339744265Z" level=info msg="StopContainer for \"9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e\" returns successfully"
Sep 13 01:38:41.340452 env[1591]: time="2025-09-13T01:38:41.340420463Z" level=info msg="StopPodSandbox for \"b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02\""
Sep 13 01:38:41.341839 env[1591]: time="2025-09-13T01:38:41.340482663Z" level=info msg="Container to stop
\"9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:38:41.343696 env[1591]: time="2025-09-13T01:38:41.343646697Z" level=info msg="StopContainer for \"b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03\" returns successfully"
Sep 13 01:38:41.344780 env[1591]: time="2025-09-13T01:38:41.344753735Z" level=info msg="StopPodSandbox for \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\""
Sep 13 01:38:41.345097 env[1591]: time="2025-09-13T01:38:41.345051375Z" level=info msg="Container to stop \"b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:38:41.345214 env[1591]: time="2025-09-13T01:38:41.345195855Z" level=info msg="Container to stop \"d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:38:41.345481 env[1591]: time="2025-09-13T01:38:41.345455414Z" level=info msg="Container to stop \"2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:38:41.345617 env[1591]: time="2025-09-13T01:38:41.345594334Z" level=info msg="Container to stop \"a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:38:41.345708 env[1591]: time="2025-09-13T01:38:41.345690614Z" level=info msg="Container to stop \"0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:38:41.391828 env[1591]: time="2025-09-13T01:38:41.391760648Z" level=info msg="shim disconnected" id=d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d
Sep 13 01:38:41.391828 env[1591]:
time="2025-09-13T01:38:41.391824208Z" level=warning msg="cleaning up after shim disconnected" id=d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d namespace=k8s.io
Sep 13 01:38:41.391828 env[1591]: time="2025-09-13T01:38:41.391834328Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:41.392344 env[1591]: time="2025-09-13T01:38:41.392310567Z" level=info msg="shim disconnected" id=b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02
Sep 13 01:38:41.392405 env[1591]: time="2025-09-13T01:38:41.392345767Z" level=warning msg="cleaning up after shim disconnected" id=b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02 namespace=k8s.io
Sep 13 01:38:41.392405 env[1591]: time="2025-09-13T01:38:41.392354367Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:41.402887 env[1591]: time="2025-09-13T01:38:41.402836028Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4308 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:41.403178 env[1591]: time="2025-09-13T01:38:41.403139667Z" level=info msg="TearDown network for sandbox \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" successfully"
Sep 13 01:38:41.403178 env[1591]: time="2025-09-13T01:38:41.403169187Z" level=info msg="StopPodSandbox for \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" returns successfully"
Sep 13 01:38:41.407043 env[1591]: time="2025-09-13T01:38:41.407001940Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4309 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:41.407735 env[1591]: time="2025-09-13T01:38:41.407707699Z" level=info msg="TearDown network for sandbox \"b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02\" successfully"
Sep 13 01:38:41.407844 env[1591]: time="2025-09-13T01:38:41.407826139Z" level=info msg="StopPodSandbox for
\"b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02\" returns successfully"
Sep 13 01:38:41.515638 kubelet[2605]: I0913 01:38:41.514643 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-host-proc-sys-kernel\") pod \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") "
Sep 13 01:38:41.515638 kubelet[2605]: I0913 01:38:41.514696 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cilium-run\") pod \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") "
Sep 13 01:38:41.515638 kubelet[2605]: I0913 01:38:41.514719 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cni-path\") pod \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") "
Sep 13 01:38:41.515638 kubelet[2605]: I0913 01:38:41.514735 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-xtables-lock\") pod \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") "
Sep 13 01:38:41.515638 kubelet[2605]: I0913 01:38:41.514765 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trxxs\" (UniqueName: \"kubernetes.io/projected/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-kube-api-access-trxxs\") pod \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") "
Sep 13 01:38:41.515638 kubelet[2605]: I0913 01:38:41.514781 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cilium-cgroup\") pod \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") "
Sep 13 01:38:41.516111 kubelet[2605]: I0913 01:38:41.514796 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-lib-modules\") pod \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") "
Sep 13 01:38:41.516111 kubelet[2605]: I0913 01:38:41.514815 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/296ed9b3-bae5-467d-9626-f67b8f8b6e4a-cilium-config-path\") pod \"296ed9b3-bae5-467d-9626-f67b8f8b6e4a\" (UID: \"296ed9b3-bae5-467d-9626-f67b8f8b6e4a\") "
Sep 13 01:38:41.516111 kubelet[2605]: I0913 01:38:41.514842 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-hubble-tls\") pod \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") "
Sep 13 01:38:41.516111 kubelet[2605]: I0913 01:38:41.514861 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhhvg\" (UniqueName: \"kubernetes.io/projected/296ed9b3-bae5-467d-9626-f67b8f8b6e4a-kube-api-access-qhhvg\") pod \"296ed9b3-bae5-467d-9626-f67b8f8b6e4a\" (UID: \"296ed9b3-bae5-467d-9626-f67b8f8b6e4a\") "
Sep 13 01:38:41.516111 kubelet[2605]: I0913 01:38:41.514879 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-host-proc-sys-net\") pod \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") "
Sep 13 01:38:41.516111
kubelet[2605]: I0913 01:38:41.514898 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-clustermesh-secrets\") pod \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") "
Sep 13 01:38:41.516248 kubelet[2605]: I0913 01:38:41.514922 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-etc-cni-netd\") pod \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") "
Sep 13 01:38:41.516248 kubelet[2605]: I0913 01:38:41.514936 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-bpf-maps\") pod \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") "
Sep 13 01:38:41.516248 kubelet[2605]: I0913 01:38:41.514952 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cilium-config-path\") pod \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") "
Sep 13 01:38:41.516248 kubelet[2605]: I0913 01:38:41.514967 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-hostproc\") pod \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\" (UID: \"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380\") "
Sep 13 01:38:41.516248 kubelet[2605]: I0913 01:38:41.515040 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-hostproc" (OuterVolumeSpecName: "hostproc") pod
"cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" (UID: "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:41.516248 kubelet[2605]: I0913 01:38:41.515086 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" (UID: "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:41.516382 kubelet[2605]: I0913 01:38:41.515103 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" (UID: "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:41.516382 kubelet[2605]: I0913 01:38:41.515116 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cni-path" (OuterVolumeSpecName: "cni-path") pod "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" (UID: "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:41.516382 kubelet[2605]: I0913 01:38:41.515131 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" (UID: "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380"). InnerVolumeSpecName "xtables-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:41.516653 kubelet[2605]: I0913 01:38:41.516506 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" (UID: "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:41.516653 kubelet[2605]: I0913 01:38:41.516548 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" (UID: "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:41.517193 kubelet[2605]: I0913 01:38:41.516771 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" (UID: "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:41.517341 kubelet[2605]: I0913 01:38:41.517324 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" (UID: "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380"). InnerVolumeSpecName "etc-cni-netd".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:41.519878 kubelet[2605]: I0913 01:38:41.517427 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" (UID: "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:41.519981 kubelet[2605]: I0913 01:38:41.519202 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" (UID: "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 01:38:41.520149 kubelet[2605]: I0913 01:38:41.520116 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" (UID: "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 01:38:41.522007 kubelet[2605]: I0913 01:38:41.521984 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/296ed9b3-bae5-467d-9626-f67b8f8b6e4a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "296ed9b3-bae5-467d-9626-f67b8f8b6e4a" (UID: "296ed9b3-bae5-467d-9626-f67b8f8b6e4a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 01:38:41.522733 kubelet[2605]: I0913 01:38:41.522688 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-kube-api-access-trxxs" (OuterVolumeSpecName: "kube-api-access-trxxs") pod "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" (UID: "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380"). InnerVolumeSpecName "kube-api-access-trxxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 01:38:41.522806 kubelet[2605]: I0913 01:38:41.522792 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/296ed9b3-bae5-467d-9626-f67b8f8b6e4a-kube-api-access-qhhvg" (OuterVolumeSpecName: "kube-api-access-qhhvg") pod "296ed9b3-bae5-467d-9626-f67b8f8b6e4a" (UID: "296ed9b3-bae5-467d-9626-f67b8f8b6e4a"). InnerVolumeSpecName "kube-api-access-qhhvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 01:38:41.524112 kubelet[2605]: I0913 01:38:41.524082 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" (UID: "cf6c76e8-917c-4f0c-9e0b-63aa17d2c380"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 01:38:41.616132 kubelet[2605]: I0913 01:38:41.616094 2605 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cilium-config-path\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.616132 kubelet[2605]: I0913 01:38:41.616125 2605 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-host-proc-sys-net\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.616132 kubelet[2605]: I0913 01:38:41.616136 2605 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-clustermesh-secrets\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.616335 kubelet[2605]: I0913 01:38:41.616155 2605 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-etc-cni-netd\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.616335 kubelet[2605]: I0913 01:38:41.616165 2605 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-bpf-maps\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.616335 kubelet[2605]: I0913 01:38:41.616174 2605 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-hostproc\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.616335 kubelet[2605]: I0913 01:38:41.616182 2605 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cni-path\") on node 
\"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.616335 kubelet[2605]: I0913 01:38:41.616191 2605 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-xtables-lock\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.616335 kubelet[2605]: I0913 01:38:41.616199 2605 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.616335 kubelet[2605]: I0913 01:38:41.616208 2605 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cilium-run\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.616335 kubelet[2605]: I0913 01:38:41.616216 2605 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-cilium-cgroup\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.616542 kubelet[2605]: I0913 01:38:41.616233 2605 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trxxs\" (UniqueName: \"kubernetes.io/projected/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-kube-api-access-trxxs\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.616542 kubelet[2605]: I0913 01:38:41.616241 2605 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-lib-modules\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.616542 kubelet[2605]: I0913 01:38:41.616250 2605 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/296ed9b3-bae5-467d-9626-f67b8f8b6e4a-cilium-config-path\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.616542 kubelet[2605]: I0913 01:38:41.616258 2605 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380-hubble-tls\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.616542 kubelet[2605]: I0913 01:38:41.616267 2605 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhhvg\" (UniqueName: \"kubernetes.io/projected/296ed9b3-bae5-467d-9626-f67b8f8b6e4a-kube-api-access-qhhvg\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\"" Sep 13 01:38:41.930494 kubelet[2605]: I0913 01:38:41.930462 2605 scope.go:117] "RemoveContainer" containerID="9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e" Sep 13 01:38:41.933832 env[1591]: time="2025-09-13T01:38:41.933708365Z" level=info msg="RemoveContainer for \"9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e\"" Sep 13 01:38:41.951015 env[1591]: time="2025-09-13T01:38:41.950965653Z" level=info msg="RemoveContainer for \"9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e\" returns successfully" Sep 13 01:38:41.953091 env[1591]: time="2025-09-13T01:38:41.951559292Z" level=error msg="ContainerStatus for \"9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e\": not found" Sep 13 01:38:41.953198 kubelet[2605]: I0913 01:38:41.951303 2605 scope.go:117] "RemoveContainer" containerID="9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e" Sep 13 01:38:41.953198 kubelet[2605]: E0913 01:38:41.951776 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e\": not found" containerID="9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e" Sep 13 01:38:41.953198 kubelet[2605]: I0913 01:38:41.951806 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e"} err="failed to get container status \"9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e\": rpc error: code = NotFound desc = an error occurred when try to find container \"9fe0d93b8c720c6411a54de6bf01db2fd083317b0d244f6d8e6c3b88730f221e\": not found" Sep 13 01:38:41.953198 kubelet[2605]: I0913 01:38:41.951880 2605 scope.go:117] "RemoveContainer" containerID="b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03" Sep 13 01:38:41.954439 env[1591]: time="2025-09-13T01:38:41.954378367Z" level=info msg="RemoveContainer for \"b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03\"" Sep 13 01:38:41.964482 env[1591]: time="2025-09-13T01:38:41.964443588Z" level=info msg="RemoveContainer for \"b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03\" returns successfully" Sep 13 01:38:41.964746 kubelet[2605]: I0913 01:38:41.964706 2605 scope.go:117] "RemoveContainer" containerID="0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db" Sep 13 01:38:41.971264 env[1591]: time="2025-09-13T01:38:41.971224896Z" level=info msg="RemoveContainer for \"0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db\"" Sep 13 01:38:41.980659 env[1591]: time="2025-09-13T01:38:41.980607679Z" level=info msg="RemoveContainer for \"0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db\" returns successfully" Sep 13 01:38:41.981536 kubelet[2605]: I0913 01:38:41.980923 2605 scope.go:117] "RemoveContainer" containerID="2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98" Sep 13 01:38:41.982780 env[1591]: 
time="2025-09-13T01:38:41.982753915Z" level=info msg="RemoveContainer for \"2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98\"" Sep 13 01:38:41.990744 env[1591]: time="2025-09-13T01:38:41.990705780Z" level=info msg="RemoveContainer for \"2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98\" returns successfully" Sep 13 01:38:41.991121 kubelet[2605]: I0913 01:38:41.991091 2605 scope.go:117] "RemoveContainer" containerID="a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c" Sep 13 01:38:41.992341 env[1591]: time="2025-09-13T01:38:41.992304417Z" level=info msg="RemoveContainer for \"a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c\"" Sep 13 01:38:42.006019 env[1591]: time="2025-09-13T01:38:42.005976352Z" level=info msg="RemoveContainer for \"a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c\" returns successfully" Sep 13 01:38:42.006303 kubelet[2605]: I0913 01:38:42.006270 2605 scope.go:117] "RemoveContainer" containerID="d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a" Sep 13 01:38:42.007534 env[1591]: time="2025-09-13T01:38:42.007494869Z" level=info msg="RemoveContainer for \"d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a\"" Sep 13 01:38:42.014714 env[1591]: time="2025-09-13T01:38:42.014679576Z" level=info msg="RemoveContainer for \"d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a\" returns successfully" Sep 13 01:38:42.015058 kubelet[2605]: I0913 01:38:42.015026 2605 scope.go:117] "RemoveContainer" containerID="b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03" Sep 13 01:38:42.015435 env[1591]: time="2025-09-13T01:38:42.015370655Z" level=error msg="ContainerStatus for \"b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03\": not found" Sep 13 
01:38:42.015774 kubelet[2605]: E0913 01:38:42.015632 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03\": not found" containerID="b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03" Sep 13 01:38:42.015774 kubelet[2605]: I0913 01:38:42.015673 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03"} err="failed to get container status \"b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5d8ed04a091752aca4d81a1e94ee498bbc1ce4419c435435549ebc1234bba03\": not found" Sep 13 01:38:42.015774 kubelet[2605]: I0913 01:38:42.015697 2605 scope.go:117] "RemoveContainer" containerID="0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db" Sep 13 01:38:42.015902 env[1591]: time="2025-09-13T01:38:42.015844614Z" level=error msg="ContainerStatus for \"0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db\": not found" Sep 13 01:38:42.016206 kubelet[2605]: E0913 01:38:42.016061 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db\": not found" containerID="0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db" Sep 13 01:38:42.016206 kubelet[2605]: I0913 01:38:42.016084 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db"} err="failed 
to get container status \"0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a7b8cf6acaaccad8131fc7ff8675be18c345c49444817a39ff75d3ef7d514db\": not found" Sep 13 01:38:42.016206 kubelet[2605]: I0913 01:38:42.016112 2605 scope.go:117] "RemoveContainer" containerID="2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98" Sep 13 01:38:42.016323 env[1591]: time="2025-09-13T01:38:42.016265134Z" level=error msg="ContainerStatus for \"2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98\": not found" Sep 13 01:38:42.016616 kubelet[2605]: E0913 01:38:42.016477 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98\": not found" containerID="2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98" Sep 13 01:38:42.016616 kubelet[2605]: I0913 01:38:42.016509 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98"} err="failed to get container status \"2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98\": rpc error: code = NotFound desc = an error occurred when try to find container \"2446121784397cf7d2a5f335f1a4435cf4853a089d12d8cd0e045a6591cf9a98\": not found" Sep 13 01:38:42.016616 kubelet[2605]: I0913 01:38:42.016524 2605 scope.go:117] "RemoveContainer" containerID="a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c" Sep 13 01:38:42.016738 env[1591]: time="2025-09-13T01:38:42.016659893Z" level=error msg="ContainerStatus for 
\"a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c\": not found" Sep 13 01:38:42.016981 kubelet[2605]: E0913 01:38:42.016857 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c\": not found" containerID="a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c" Sep 13 01:38:42.016981 kubelet[2605]: I0913 01:38:42.016877 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c"} err="failed to get container status \"a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c\": rpc error: code = NotFound desc = an error occurred when try to find container \"a94396db844db7b3d1a6b07ad0e563f36aac3c80903299ed20a1db22655da39c\": not found" Sep 13 01:38:42.016981 kubelet[2605]: I0913 01:38:42.016907 2605 scope.go:117] "RemoveContainer" containerID="d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a" Sep 13 01:38:42.017100 env[1591]: time="2025-09-13T01:38:42.017019372Z" level=error msg="ContainerStatus for \"d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a\": not found" Sep 13 01:38:42.017295 kubelet[2605]: E0913 01:38:42.017218 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a\": not found" 
containerID="d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a" Sep 13 01:38:42.017295 kubelet[2605]: I0913 01:38:42.017239 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a"} err="failed to get container status \"d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3ec7b580907d49f15fb3ba6bad4ae71d0d01dffecb1438de5b5fd046c9a6a0a\": not found" Sep 13 01:38:42.207054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02-rootfs.mount: Deactivated successfully. Sep 13 01:38:42.207193 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02-shm.mount: Deactivated successfully. Sep 13 01:38:42.207284 systemd[1]: var-lib-kubelet-pods-296ed9b3\x2dbae5\x2d467d\x2d9626\x2df67b8f8b6e4a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqhhvg.mount: Deactivated successfully. Sep 13 01:38:42.207365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d-rootfs.mount: Deactivated successfully. Sep 13 01:38:42.207458 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d-shm.mount: Deactivated successfully. Sep 13 01:38:42.207538 systemd[1]: var-lib-kubelet-pods-cf6c76e8\x2d917c\x2d4f0c\x2d9e0b\x2d63aa17d2c380-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtrxxs.mount: Deactivated successfully. Sep 13 01:38:42.207668 systemd[1]: var-lib-kubelet-pods-cf6c76e8\x2d917c\x2d4f0c\x2d9e0b\x2d63aa17d2c380-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 13 01:38:42.207750 systemd[1]: var-lib-kubelet-pods-cf6c76e8\x2d917c\x2d4f0c\x2d9e0b\x2d63aa17d2c380-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 01:38:42.459931 kubelet[2605]: I0913 01:38:42.459485 2605 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="296ed9b3-bae5-467d-9626-f67b8f8b6e4a" path="/var/lib/kubelet/pods/296ed9b3-bae5-467d-9626-f67b8f8b6e4a/volumes" Sep 13 01:38:42.460477 kubelet[2605]: I0913 01:38:42.460459 2605 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" path="/var/lib/kubelet/pods/cf6c76e8-917c-4f0c-9e0b-63aa17d2c380/volumes" Sep 13 01:38:43.217613 sshd[4169]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:43.220562 systemd[1]: sshd@21-10.200.20.20:22-10.200.16.10:45230.service: Deactivated successfully. Sep 13 01:38:43.221921 systemd-logind[1563]: Session 24 logged out. Waiting for processes to exit. Sep 13 01:38:43.222633 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 01:38:43.223623 systemd-logind[1563]: Removed session 24. Sep 13 01:38:43.289835 systemd[1]: Started sshd@22-10.200.20.20:22-10.200.16.10:34656.service. Sep 13 01:38:43.703919 sshd[4342]: Accepted publickey for core from 10.200.16.10 port 34656 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:38:43.705219 sshd[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:38:43.709902 systemd[1]: Started session-25.scope. Sep 13 01:38:43.710093 systemd-logind[1563]: New session 25 of user core. 
Sep 13 01:38:44.578131 kubelet[2605]: E0913 01:38:44.578096 2605 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 01:38:45.142005 kubelet[2605]: E0913 01:38:45.141961 2605 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" containerName="apply-sysctl-overwrites" Sep 13 01:38:45.142189 kubelet[2605]: E0913 01:38:45.142176 2605 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" containerName="clean-cilium-state" Sep 13 01:38:45.142245 kubelet[2605]: E0913 01:38:45.142236 2605 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" containerName="cilium-agent" Sep 13 01:38:45.142299 kubelet[2605]: E0913 01:38:45.142289 2605 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" containerName="mount-cgroup" Sep 13 01:38:45.142350 kubelet[2605]: E0913 01:38:45.142340 2605 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="296ed9b3-bae5-467d-9626-f67b8f8b6e4a" containerName="cilium-operator" Sep 13 01:38:45.142450 kubelet[2605]: E0913 01:38:45.142439 2605 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" containerName="mount-bpf-fs" Sep 13 01:38:45.142537 kubelet[2605]: I0913 01:38:45.142526 2605 memory_manager.go:354] "RemoveStaleState removing state" podUID="296ed9b3-bae5-467d-9626-f67b8f8b6e4a" containerName="cilium-operator" Sep 13 01:38:45.142594 kubelet[2605]: I0913 01:38:45.142585 2605 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf6c76e8-917c-4f0c-9e0b-63aa17d2c380" containerName="cilium-agent" Sep 13 01:38:45.177444 sshd[4342]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:45.180879 systemd[1]: 
sshd@22-10.200.20.20:22-10.200.16.10:34656.service: Deactivated successfully. Sep 13 01:38:45.181667 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 01:38:45.182044 systemd-logind[1563]: Session 25 logged out. Waiting for processes to exit. Sep 13 01:38:45.182781 systemd-logind[1563]: Removed session 25. Sep 13 01:38:45.236653 kubelet[2605]: I0913 01:38:45.236615 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-cgroup\") pod \"cilium-6j9xr\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") " pod="kube-system/cilium-6j9xr" Sep 13 01:38:45.236883 kubelet[2605]: I0913 01:38:45.236866 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-hostproc\") pod \"cilium-6j9xr\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") " pod="kube-system/cilium-6j9xr" Sep 13 01:38:45.236987 kubelet[2605]: I0913 01:38:45.236975 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-xtables-lock\") pod \"cilium-6j9xr\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") " pod="kube-system/cilium-6j9xr" Sep 13 01:38:45.237065 kubelet[2605]: I0913 01:38:45.237053 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-run\") pod \"cilium-6j9xr\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") " pod="kube-system/cilium-6j9xr" Sep 13 01:38:45.237136 kubelet[2605]: I0913 01:38:45.237125 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-bpf-maps\") pod \"cilium-6j9xr\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") " pod="kube-system/cilium-6j9xr" Sep 13 01:38:45.237209 kubelet[2605]: I0913 01:38:45.237197 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-etc-cni-netd\") pod \"cilium-6j9xr\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") " pod="kube-system/cilium-6j9xr" Sep 13 01:38:45.237278 kubelet[2605]: I0913 01:38:45.237265 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-lib-modules\") pod \"cilium-6j9xr\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") " pod="kube-system/cilium-6j9xr" Sep 13 01:38:45.237364 kubelet[2605]: I0913 01:38:45.237352 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-clustermesh-secrets\") pod \"cilium-6j9xr\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") " pod="kube-system/cilium-6j9xr" Sep 13 01:38:45.237465 kubelet[2605]: I0913 01:38:45.237453 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn9zq\" (UniqueName: \"kubernetes.io/projected/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-kube-api-access-jn9zq\") pod \"cilium-6j9xr\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") " pod="kube-system/cilium-6j9xr" Sep 13 01:38:45.237548 kubelet[2605]: I0913 01:38:45.237537 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cni-path\") pod \"cilium-6j9xr\" (UID: 
\"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") " pod="kube-system/cilium-6j9xr"
Sep 13 01:38:45.237628 kubelet[2605]: I0913 01:38:45.237616 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-ipsec-secrets\") pod \"cilium-6j9xr\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") " pod="kube-system/cilium-6j9xr"
Sep 13 01:38:45.237702 kubelet[2605]: I0913 01:38:45.237688 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-host-proc-sys-net\") pod \"cilium-6j9xr\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") " pod="kube-system/cilium-6j9xr"
Sep 13 01:38:45.237784 kubelet[2605]: I0913 01:38:45.237773 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-host-proc-sys-kernel\") pod \"cilium-6j9xr\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") " pod="kube-system/cilium-6j9xr"
Sep 13 01:38:45.237863 kubelet[2605]: I0913 01:38:45.237851 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-hubble-tls\") pod \"cilium-6j9xr\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") " pod="kube-system/cilium-6j9xr"
Sep 13 01:38:45.237984 kubelet[2605]: I0913 01:38:45.237971 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-config-path\") pod \"cilium-6j9xr\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") " pod="kube-system/cilium-6j9xr"
Sep 13 01:38:45.244605 systemd[1]: Started sshd@23-10.200.20.20:22-10.200.16.10:34660.service.
Sep 13 01:38:45.446588 env[1591]: time="2025-09-13T01:38:45.445851974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6j9xr,Uid:bd7cdb66-7207-4155-b841-2c30e7e1ff3b,Namespace:kube-system,Attempt:0,}"
Sep 13 01:38:45.476855 env[1591]: time="2025-09-13T01:38:45.476663205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:38:45.476855 env[1591]: time="2025-09-13T01:38:45.476703925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:38:45.476855 env[1591]: time="2025-09-13T01:38:45.476714565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:38:45.477055 env[1591]: time="2025-09-13T01:38:45.476891125Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70 pid=4368 runtime=io.containerd.runc.v2
Sep 13 01:38:45.512524 env[1591]: time="2025-09-13T01:38:45.512469829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6j9xr,Uid:bd7cdb66-7207-4155-b841-2c30e7e1ff3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70\""
Sep 13 01:38:45.517244 env[1591]: time="2025-09-13T01:38:45.517208261Z" level=info msg="CreateContainer within sandbox \"cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 01:38:45.549570 env[1591]: time="2025-09-13T01:38:45.549488771Z" level=info msg="CreateContainer within sandbox \"cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"af810166c1104f4e9f8d05e33fbe19162c1a2213b8fa6f405fb53aa982ac66be\""
Sep 13 01:38:45.551623 env[1591]: time="2025-09-13T01:38:45.550006690Z" level=info msg="StartContainer for \"af810166c1104f4e9f8d05e33fbe19162c1a2213b8fa6f405fb53aa982ac66be\""
Sep 13 01:38:45.603267 env[1591]: time="2025-09-13T01:38:45.603226366Z" level=info msg="StartContainer for \"af810166c1104f4e9f8d05e33fbe19162c1a2213b8fa6f405fb53aa982ac66be\" returns successfully"
Sep 13 01:38:45.658625 sshd[4354]: Accepted publickey for core from 10.200.16.10 port 34660 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:45.660022 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:45.664591 systemd[1]: Started session-26.scope.
Sep 13 01:38:45.665502 systemd-logind[1563]: New session 26 of user core.
Sep 13 01:38:45.667973 env[1591]: time="2025-09-13T01:38:45.667918225Z" level=info msg="shim disconnected" id=af810166c1104f4e9f8d05e33fbe19162c1a2213b8fa6f405fb53aa982ac66be
Sep 13 01:38:45.668145 env[1591]: time="2025-09-13T01:38:45.668117344Z" level=warning msg="cleaning up after shim disconnected" id=af810166c1104f4e9f8d05e33fbe19162c1a2213b8fa6f405fb53aa982ac66be namespace=k8s.io
Sep 13 01:38:45.668145 env[1591]: time="2025-09-13T01:38:45.668139024Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:45.681280 env[1591]: time="2025-09-13T01:38:45.681227964Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4455 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:45.957000 env[1591]: time="2025-09-13T01:38:45.956961690Z" level=info msg="CreateContainer within sandbox \"cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 01:38:45.992429 env[1591]: time="2025-09-13T01:38:45.992347715Z" level=info msg="CreateContainer within sandbox \"cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"206ed148152eaf472db27dcc0b57a8c903b81a04751c503c00bef9a0981aa661\""
Sep 13 01:38:45.994358 env[1591]: time="2025-09-13T01:38:45.994318712Z" level=info msg="StartContainer for \"206ed148152eaf472db27dcc0b57a8c903b81a04751c503c00bef9a0981aa661\""
Sep 13 01:38:46.057470 env[1591]: time="2025-09-13T01:38:46.057426616Z" level=info msg="StartContainer for \"206ed148152eaf472db27dcc0b57a8c903b81a04751c503c00bef9a0981aa661\" returns successfully"
Sep 13 01:38:46.060287 sshd[4354]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:46.063760 systemd[1]: sshd@23-10.200.20.20:22-10.200.16.10:34660.service: Deactivated successfully.
Sep 13 01:38:46.064557 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 01:38:46.066299 systemd-logind[1563]: Session 26 logged out. Waiting for processes to exit.
Sep 13 01:38:46.068496 systemd-logind[1563]: Removed session 26.
Sep 13 01:38:46.094303 env[1591]: time="2025-09-13T01:38:46.094256001Z" level=info msg="shim disconnected" id=206ed148152eaf472db27dcc0b57a8c903b81a04751c503c00bef9a0981aa661
Sep 13 01:38:46.094303 env[1591]: time="2025-09-13T01:38:46.094298281Z" level=warning msg="cleaning up after shim disconnected" id=206ed148152eaf472db27dcc0b57a8c903b81a04751c503c00bef9a0981aa661 namespace=k8s.io
Sep 13 01:38:46.094303 env[1591]: time="2025-09-13T01:38:46.094309681Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:46.101619 env[1591]: time="2025-09-13T01:38:46.101570750Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4525 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:46.127471 systemd[1]: Started sshd@24-10.200.20.20:22-10.200.16.10:34668.service.
Sep 13 01:38:46.538914 sshd[4537]: Accepted publickey for core from 10.200.16.10 port 34668 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:46.540609 sshd[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:46.545484 systemd[1]: Started session-27.scope.
Sep 13 01:38:46.545919 systemd-logind[1563]: New session 27 of user core.
Sep 13 01:38:46.951259 env[1591]: time="2025-09-13T01:38:46.951222911Z" level=info msg="StopPodSandbox for \"cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70\""
Sep 13 01:38:46.951836 env[1591]: time="2025-09-13T01:38:46.951809071Z" level=info msg="Container to stop \"206ed148152eaf472db27dcc0b57a8c903b81a04751c503c00bef9a0981aa661\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:38:46.951936 env[1591]: time="2025-09-13T01:38:46.951917630Z" level=info msg="Container to stop \"af810166c1104f4e9f8d05e33fbe19162c1a2213b8fa6f405fb53aa982ac66be\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:38:46.954064 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70-shm.mount: Deactivated successfully.
Sep 13 01:38:46.981591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70-rootfs.mount: Deactivated successfully.
Sep 13 01:38:46.996554 env[1591]: time="2025-09-13T01:38:46.996499803Z" level=info msg="shim disconnected" id=cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70
Sep 13 01:38:46.996775 env[1591]: time="2025-09-13T01:38:46.996757283Z" level=warning msg="cleaning up after shim disconnected" id=cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70 namespace=k8s.io
Sep 13 01:38:46.996833 env[1591]: time="2025-09-13T01:38:46.996820803Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:47.004097 env[1591]: time="2025-09-13T01:38:47.004056752Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4567 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:47.004604 env[1591]: time="2025-09-13T01:38:47.004579031Z" level=info msg="TearDown network for sandbox \"cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70\" successfully"
Sep 13 01:38:47.004712 env[1591]: time="2025-09-13T01:38:47.004694951Z" level=info msg="StopPodSandbox for \"cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70\" returns successfully"
Sep 13 01:38:47.150027 kubelet[2605]: I0913 01:38:47.149578 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-hubble-tls\") pod \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") "
Sep 13 01:38:47.150027 kubelet[2605]: I0913 01:38:47.149617 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-cgroup\") pod \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") "
Sep 13 01:38:47.150027 kubelet[2605]: I0913 01:38:47.149645 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-lib-modules\") pod \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") "
Sep 13 01:38:47.150027 kubelet[2605]: I0913 01:38:47.149662 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-host-proc-sys-net\") pod \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") "
Sep 13 01:38:47.150027 kubelet[2605]: I0913 01:38:47.149678 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-run\") pod \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") "
Sep 13 01:38:47.150027 kubelet[2605]: I0913 01:38:47.149693 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-xtables-lock\") pod \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") "
Sep 13 01:38:47.150545 kubelet[2605]: I0913 01:38:47.149727 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-config-path\") pod \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") "
Sep 13 01:38:47.150545 kubelet[2605]: I0913 01:38:47.149745 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-hostproc\") pod \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") "
Sep 13 01:38:47.150545 kubelet[2605]: I0913 01:38:47.149763 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-clustermesh-secrets\") pod \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") "
Sep 13 01:38:47.150545 kubelet[2605]: I0913 01:38:47.149786 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cni-path\") pod \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") "
Sep 13 01:38:47.150545 kubelet[2605]: I0913 01:38:47.149804 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-ipsec-secrets\") pod \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") "
Sep 13 01:38:47.150545 kubelet[2605]: I0913 01:38:47.149820 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-bpf-maps\") pod \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") "
Sep 13 01:38:47.150687 kubelet[2605]: I0913 01:38:47.149834 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-host-proc-sys-kernel\") pod \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") "
Sep 13 01:38:47.150687 kubelet[2605]: I0913 01:38:47.149863 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn9zq\" (UniqueName: \"kubernetes.io/projected/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-kube-api-access-jn9zq\") pod \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") "
Sep 13 01:38:47.150687 kubelet[2605]: I0913 01:38:47.149883 2605 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-etc-cni-netd\") pod \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\" (UID: \"bd7cdb66-7207-4155-b841-2c30e7e1ff3b\") "
Sep 13 01:38:47.150687 kubelet[2605]: I0913 01:38:47.149967 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bd7cdb66-7207-4155-b841-2c30e7e1ff3b" (UID: "bd7cdb66-7207-4155-b841-2c30e7e1ff3b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:47.150687 kubelet[2605]: I0913 01:38:47.150455 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-hostproc" (OuterVolumeSpecName: "hostproc") pod "bd7cdb66-7207-4155-b841-2c30e7e1ff3b" (UID: "bd7cdb66-7207-4155-b841-2c30e7e1ff3b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:47.150814 kubelet[2605]: I0913 01:38:47.150486 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bd7cdb66-7207-4155-b841-2c30e7e1ff3b" (UID: "bd7cdb66-7207-4155-b841-2c30e7e1ff3b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:47.150814 kubelet[2605]: I0913 01:38:47.150501 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bd7cdb66-7207-4155-b841-2c30e7e1ff3b" (UID: "bd7cdb66-7207-4155-b841-2c30e7e1ff3b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:47.150814 kubelet[2605]: I0913 01:38:47.150515 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bd7cdb66-7207-4155-b841-2c30e7e1ff3b" (UID: "bd7cdb66-7207-4155-b841-2c30e7e1ff3b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:47.150814 kubelet[2605]: I0913 01:38:47.150528 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bd7cdb66-7207-4155-b841-2c30e7e1ff3b" (UID: "bd7cdb66-7207-4155-b841-2c30e7e1ff3b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:47.150814 kubelet[2605]: I0913 01:38:47.150541 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bd7cdb66-7207-4155-b841-2c30e7e1ff3b" (UID: "bd7cdb66-7207-4155-b841-2c30e7e1ff3b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:47.152448 kubelet[2605]: I0913 01:38:47.152398 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bd7cdb66-7207-4155-b841-2c30e7e1ff3b" (UID: "bd7cdb66-7207-4155-b841-2c30e7e1ff3b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 01:38:47.152564 kubelet[2605]: I0913 01:38:47.152468 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bd7cdb66-7207-4155-b841-2c30e7e1ff3b" (UID: "bd7cdb66-7207-4155-b841-2c30e7e1ff3b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:47.152564 kubelet[2605]: I0913 01:38:47.152485 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cni-path" (OuterVolumeSpecName: "cni-path") pod "bd7cdb66-7207-4155-b841-2c30e7e1ff3b" (UID: "bd7cdb66-7207-4155-b841-2c30e7e1ff3b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:47.153058 kubelet[2605]: I0913 01:38:47.153006 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bd7cdb66-7207-4155-b841-2c30e7e1ff3b" (UID: "bd7cdb66-7207-4155-b841-2c30e7e1ff3b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 01:38:47.156520 systemd[1]: var-lib-kubelet-pods-bd7cdb66\x2d7207\x2d4155\x2db841\x2d2c30e7e1ff3b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Sep 13 01:38:47.159767 systemd[1]: var-lib-kubelet-pods-bd7cdb66\x2d7207\x2d4155\x2db841\x2d2c30e7e1ff3b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djn9zq.mount: Deactivated successfully.
Sep 13 01:38:47.159899 systemd[1]: var-lib-kubelet-pods-bd7cdb66\x2d7207\x2d4155\x2db841\x2d2c30e7e1ff3b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 01:38:47.161136 kubelet[2605]: I0913 01:38:47.161106 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "bd7cdb66-7207-4155-b841-2c30e7e1ff3b" (UID: "bd7cdb66-7207-4155-b841-2c30e7e1ff3b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 01:38:47.161599 kubelet[2605]: I0913 01:38:47.161485 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bd7cdb66-7207-4155-b841-2c30e7e1ff3b" (UID: "bd7cdb66-7207-4155-b841-2c30e7e1ff3b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 01:38:47.162358 kubelet[2605]: I0913 01:38:47.162334 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-kube-api-access-jn9zq" (OuterVolumeSpecName: "kube-api-access-jn9zq") pod "bd7cdb66-7207-4155-b841-2c30e7e1ff3b" (UID: "bd7cdb66-7207-4155-b841-2c30e7e1ff3b"). InnerVolumeSpecName "kube-api-access-jn9zq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 01:38:47.164176 kubelet[2605]: I0913 01:38:47.164150 2605 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bd7cdb66-7207-4155-b841-2c30e7e1ff3b" (UID: "bd7cdb66-7207-4155-b841-2c30e7e1ff3b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 01:38:47.250658 kubelet[2605]: I0913 01:38:47.250561 2605 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-bpf-maps\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\""
Sep 13 01:38:47.250817 kubelet[2605]: I0913 01:38:47.250802 2605 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn9zq\" (UniqueName: \"kubernetes.io/projected/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-kube-api-access-jn9zq\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\""
Sep 13 01:38:47.250955 kubelet[2605]: I0913 01:38:47.250920 2605 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\""
Sep 13 01:38:47.251019 kubelet[2605]: I0913 01:38:47.251008 2605 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-etc-cni-netd\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\""
Sep 13 01:38:47.251080 kubelet[2605]: I0913 01:38:47.251070 2605 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-hubble-tls\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\""
Sep 13 01:38:47.251141 kubelet[2605]: I0913 01:38:47.251131 2605 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-cgroup\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\""
Sep 13 01:38:47.251198 kubelet[2605]: I0913 01:38:47.251187 2605 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-host-proc-sys-net\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\""
Sep 13 01:38:47.251256 kubelet[2605]: I0913 01:38:47.251246 2605 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-lib-modules\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\""
Sep 13 01:38:47.251312 kubelet[2605]: I0913 01:38:47.251302 2605 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-xtables-lock\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\""
Sep 13 01:38:47.251372 kubelet[2605]: I0913 01:38:47.251362 2605 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-run\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\""
Sep 13 01:38:47.251460 kubelet[2605]: I0913 01:38:47.251449 2605 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cni-path\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\""
Sep 13 01:38:47.251514 kubelet[2605]: I0913 01:38:47.251504 2605 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\""
Sep 13 01:38:47.251569 kubelet[2605]: I0913 01:38:47.251558 2605 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-cilium-config-path\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\""
Sep 13 01:38:47.251625 kubelet[2605]: I0913 01:38:47.251615 2605 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-hostproc\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\""
Sep 13 01:38:47.251681 kubelet[2605]: I0913 01:38:47.251671 2605 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd7cdb66-7207-4155-b841-2c30e7e1ff3b-clustermesh-secrets\") on node \"ci-3510.3.8-n-8d5f1b2fe1\" DevicePath \"\""
Sep 13 01:38:47.343998 systemd[1]: var-lib-kubelet-pods-bd7cdb66\x2d7207\x2d4155\x2db841\x2d2c30e7e1ff3b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 01:38:47.953816 kubelet[2605]: I0913 01:38:47.953785 2605 scope.go:117] "RemoveContainer" containerID="206ed148152eaf472db27dcc0b57a8c903b81a04751c503c00bef9a0981aa661"
Sep 13 01:38:47.955444 env[1591]: time="2025-09-13T01:38:47.955409184Z" level=info msg="RemoveContainer for \"206ed148152eaf472db27dcc0b57a8c903b81a04751c503c00bef9a0981aa661\""
Sep 13 01:38:47.964761 env[1591]: time="2025-09-13T01:38:47.964720450Z" level=info msg="RemoveContainer for \"206ed148152eaf472db27dcc0b57a8c903b81a04751c503c00bef9a0981aa661\" returns successfully"
Sep 13 01:38:47.965124 kubelet[2605]: I0913 01:38:47.965105 2605 scope.go:117] "RemoveContainer" containerID="af810166c1104f4e9f8d05e33fbe19162c1a2213b8fa6f405fb53aa982ac66be"
Sep 13 01:38:47.966569 env[1591]: time="2025-09-13T01:38:47.966539008Z" level=info msg="RemoveContainer for \"af810166c1104f4e9f8d05e33fbe19162c1a2213b8fa6f405fb53aa982ac66be\""
Sep 13 01:38:47.982273 env[1591]: time="2025-09-13T01:38:47.982217905Z" level=info msg="RemoveContainer for \"af810166c1104f4e9f8d05e33fbe19162c1a2213b8fa6f405fb53aa982ac66be\" returns successfully"
Sep 13 01:38:48.007994 kubelet[2605]: E0913 01:38:48.007942 2605 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd7cdb66-7207-4155-b841-2c30e7e1ff3b" containerName="mount-cgroup"
Sep 13 01:38:48.007994 kubelet[2605]: E0913 01:38:48.007982 2605 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd7cdb66-7207-4155-b841-2c30e7e1ff3b" containerName="apply-sysctl-overwrites"
Sep 13 01:38:48.008172 kubelet[2605]: I0913 01:38:48.008007 2605 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7cdb66-7207-4155-b841-2c30e7e1ff3b" containerName="apply-sysctl-overwrites"
Sep 13 01:38:48.156347 kubelet[2605]: I0913 01:38:48.156315 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef96f6ef-1acf-4854-9b51-361f8c0d89d1-cilium-cgroup\") pod \"cilium-h6xb5\" (UID: \"ef96f6ef-1acf-4854-9b51-361f8c0d89d1\") " pod="kube-system/cilium-h6xb5"
Sep 13 01:38:48.156816 kubelet[2605]: I0913 01:38:48.156795 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef96f6ef-1acf-4854-9b51-361f8c0d89d1-host-proc-sys-net\") pod \"cilium-h6xb5\" (UID: \"ef96f6ef-1acf-4854-9b51-361f8c0d89d1\") " pod="kube-system/cilium-h6xb5"
Sep 13 01:38:48.156913 kubelet[2605]: I0913 01:38:48.156901 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ef96f6ef-1acf-4854-9b51-361f8c0d89d1-cilium-ipsec-secrets\") pod \"cilium-h6xb5\" (UID: \"ef96f6ef-1acf-4854-9b51-361f8c0d89d1\") " pod="kube-system/cilium-h6xb5"
Sep 13 01:38:48.157002 kubelet[2605]: I0913 01:38:48.156990 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef96f6ef-1acf-4854-9b51-361f8c0d89d1-bpf-maps\") pod \"cilium-h6xb5\" (UID: \"ef96f6ef-1acf-4854-9b51-361f8c0d89d1\") " pod="kube-system/cilium-h6xb5"
Sep 13 01:38:48.157090 kubelet[2605]: I0913 01:38:48.157077 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef96f6ef-1acf-4854-9b51-361f8c0d89d1-hostproc\") pod \"cilium-h6xb5\" (UID: \"ef96f6ef-1acf-4854-9b51-361f8c0d89d1\") " pod="kube-system/cilium-h6xb5"
Sep 13 01:38:48.157179 kubelet[2605]: I0913 01:38:48.157167 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bckgp\" (UniqueName: \"kubernetes.io/projected/ef96f6ef-1acf-4854-9b51-361f8c0d89d1-kube-api-access-bckgp\") pod \"cilium-h6xb5\" (UID: \"ef96f6ef-1acf-4854-9b51-361f8c0d89d1\") " pod="kube-system/cilium-h6xb5"
Sep 13 01:38:48.157273 kubelet[2605]: I0913 01:38:48.157262 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef96f6ef-1acf-4854-9b51-361f8c0d89d1-etc-cni-netd\") pod \"cilium-h6xb5\" (UID: \"ef96f6ef-1acf-4854-9b51-361f8c0d89d1\") " pod="kube-system/cilium-h6xb5"
Sep 13 01:38:48.157357 kubelet[2605]: I0913 01:38:48.157346 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef96f6ef-1acf-4854-9b51-361f8c0d89d1-cilium-config-path\") pod \"cilium-h6xb5\" (UID: \"ef96f6ef-1acf-4854-9b51-361f8c0d89d1\") " pod="kube-system/cilium-h6xb5"
Sep 13 01:38:48.157472 kubelet[2605]: I0913 01:38:48.157459 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef96f6ef-1acf-4854-9b51-361f8c0d89d1-hubble-tls\") pod \"cilium-h6xb5\" (UID: \"ef96f6ef-1acf-4854-9b51-361f8c0d89d1\") " pod="kube-system/cilium-h6xb5"
Sep 13 01:38:48.157568 kubelet[2605]: I0913 01:38:48.157555 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef96f6ef-1acf-4854-9b51-361f8c0d89d1-cilium-run\") pod \"cilium-h6xb5\" (UID: \"ef96f6ef-1acf-4854-9b51-361f8c0d89d1\") " pod="kube-system/cilium-h6xb5"
Sep 13 01:38:48.157651 kubelet[2605]: I0913 01:38:48.157640 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef96f6ef-1acf-4854-9b51-361f8c0d89d1-cni-path\") pod \"cilium-h6xb5\" (UID: \"ef96f6ef-1acf-4854-9b51-361f8c0d89d1\") " pod="kube-system/cilium-h6xb5"
Sep 13 01:38:48.157738 kubelet[2605]: I0913 01:38:48.157726 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef96f6ef-1acf-4854-9b51-361f8c0d89d1-clustermesh-secrets\") pod \"cilium-h6xb5\" (UID: \"ef96f6ef-1acf-4854-9b51-361f8c0d89d1\") " pod="kube-system/cilium-h6xb5"
Sep 13 01:38:48.157823 kubelet[2605]: I0913 01:38:48.157812 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef96f6ef-1acf-4854-9b51-361f8c0d89d1-host-proc-sys-kernel\") pod \"cilium-h6xb5\" (UID: \"ef96f6ef-1acf-4854-9b51-361f8c0d89d1\") " pod="kube-system/cilium-h6xb5"
Sep 13 01:38:48.157907 kubelet[2605]: I0913 01:38:48.157897 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef96f6ef-1acf-4854-9b51-361f8c0d89d1-xtables-lock\") pod \"cilium-h6xb5\" (UID: \"ef96f6ef-1acf-4854-9b51-361f8c0d89d1\") " pod="kube-system/cilium-h6xb5"
Sep 13 01:38:48.157999 kubelet[2605]: I0913 01:38:48.157979 2605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef96f6ef-1acf-4854-9b51-361f8c0d89d1-lib-modules\") pod \"cilium-h6xb5\" (UID: \"ef96f6ef-1acf-4854-9b51-361f8c0d89d1\") " pod="kube-system/cilium-h6xb5"
Sep 13 01:38:48.311740 env[1591]: time="2025-09-13T01:38:48.311698851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h6xb5,Uid:ef96f6ef-1acf-4854-9b51-361f8c0d89d1,Namespace:kube-system,Attempt:0,}"
Sep 13 01:38:48.348765 env[1591]: time="2025-09-13T01:38:48.348690481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:38:48.348765 env[1591]: time="2025-09-13T01:38:48.348731841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:38:48.348765 env[1591]: time="2025-09-13T01:38:48.348741841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:38:48.349160 env[1591]: time="2025-09-13T01:38:48.349119600Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/14c2097637919a63680603273fbb58b0a43468022716fc39bad82ab2adbcd984 pid=4596 runtime=io.containerd.runc.v2
Sep 13 01:38:48.391091 env[1591]: time="2025-09-13T01:38:48.391016743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h6xb5,Uid:ef96f6ef-1acf-4854-9b51-361f8c0d89d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"14c2097637919a63680603273fbb58b0a43468022716fc39bad82ab2adbcd984\""
Sep 13 01:38:48.394707 env[1591]: time="2025-09-13T01:38:48.394667018Z" level=info msg="CreateContainer within sandbox \"14c2097637919a63680603273fbb58b0a43468022716fc39bad82ab2adbcd984\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 01:38:48.421139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1160808079.mount: Deactivated successfully.
Sep 13 01:38:48.437682 env[1591]: time="2025-09-13T01:38:48.437631119Z" level=info msg="CreateContainer within sandbox \"14c2097637919a63680603273fbb58b0a43468022716fc39bad82ab2adbcd984\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"83c3459a5a63fb6c3a5bdfed91263d5a6e88e37d18b8525a2f0cca0efd8da4ec\""
Sep 13 01:38:48.438649 env[1591]: time="2025-09-13T01:38:48.438611957Z" level=info msg="StartContainer for \"83c3459a5a63fb6c3a5bdfed91263d5a6e88e37d18b8525a2f0cca0efd8da4ec\""
Sep 13 01:38:48.461237 kubelet[2605]: I0913 01:38:48.460909 2605 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7cdb66-7207-4155-b841-2c30e7e1ff3b" path="/var/lib/kubelet/pods/bd7cdb66-7207-4155-b841-2c30e7e1ff3b/volumes"
Sep 13 01:38:48.486687 env[1591]: time="2025-09-13T01:38:48.486630051Z" level=info msg="StartContainer for \"83c3459a5a63fb6c3a5bdfed91263d5a6e88e37d18b8525a2f0cca0efd8da4ec\" returns successfully"
Sep 13 01:38:48.557526 env[1591]: time="2025-09-13T01:38:48.557474834Z" level=info msg="shim disconnected" id=83c3459a5a63fb6c3a5bdfed91263d5a6e88e37d18b8525a2f0cca0efd8da4ec
Sep 13 01:38:48.557815 env[1591]: time="2025-09-13T01:38:48.557786914Z" level=warning msg="cleaning up after shim disconnected" id=83c3459a5a63fb6c3a5bdfed91263d5a6e88e37d18b8525a2f0cca0efd8da4ec namespace=k8s.io
Sep 13 01:38:48.557896 env[1591]: time="2025-09-13T01:38:48.557881793Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:48.564584 env[1591]: time="2025-09-13T01:38:48.563998585Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4676 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:48.893073 kubelet[2605]: I0913 01:38:48.892947 2605 setters.go:600] "Node became not ready" node="ci-3510.3.8-n-8d5f1b2fe1" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T01:38:48Z","lastTransitionTime":"2025-09-13T01:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 01:38:48.959750 env[1591]: time="2025-09-13T01:38:48.959699282Z" level=info msg="CreateContainer within sandbox \"14c2097637919a63680603273fbb58b0a43468022716fc39bad82ab2adbcd984\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 01:38:49.316025 env[1591]: time="2025-09-13T01:38:49.315974372Z" level=info msg="CreateContainer within sandbox \"14c2097637919a63680603273fbb58b0a43468022716fc39bad82ab2adbcd984\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dfab288d58f2c217069a3309db2bf9a3e4747cdbd127c9722f3022980cfbae42\""
Sep 13 01:38:49.316552 env[1591]: time="2025-09-13T01:38:49.316525851Z" level=info msg="StartContainer for \"dfab288d58f2c217069a3309db2bf9a3e4747cdbd127c9722f3022980cfbae42\""
Sep 13 01:38:49.370024 env[1591]: time="2025-09-13T01:38:49.369981301Z" level=info msg="StartContainer for \"dfab288d58f2c217069a3309db2bf9a3e4747cdbd127c9722f3022980cfbae42\" returns successfully"
Sep 13 01:38:49.395023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfab288d58f2c217069a3309db2bf9a3e4747cdbd127c9722f3022980cfbae42-rootfs.mount: Deactivated successfully.
Sep 13 01:38:49.409757 env[1591]: time="2025-09-13T01:38:49.409714049Z" level=info msg="shim disconnected" id=dfab288d58f2c217069a3309db2bf9a3e4747cdbd127c9722f3022980cfbae42
Sep 13 01:38:49.410028 env[1591]: time="2025-09-13T01:38:49.410009609Z" level=warning msg="cleaning up after shim disconnected" id=dfab288d58f2c217069a3309db2bf9a3e4747cdbd127c9722f3022980cfbae42 namespace=k8s.io
Sep 13 01:38:49.410095 env[1591]: time="2025-09-13T01:38:49.410082249Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:49.417305 env[1591]: time="2025-09-13T01:38:49.417266480Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4737 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:49.579971 kubelet[2605]: E0913 01:38:49.579475 2605 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 01:38:49.963992 env[1591]: time="2025-09-13T01:38:49.963874764Z" level=info msg="CreateContainer within sandbox \"14c2097637919a63680603273fbb58b0a43468022716fc39bad82ab2adbcd984\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 01:38:49.991488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount307300806.mount: Deactivated successfully.
Sep 13 01:38:50.002142 env[1591]: time="2025-09-13T01:38:50.002079954Z" level=info msg="CreateContainer within sandbox \"14c2097637919a63680603273fbb58b0a43468022716fc39bad82ab2adbcd984\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1b9221f18976c5d32ff9e1fc85761546b9f7e7b392017f61b24d096f97d33a73\""
Sep 13 01:38:50.004443 env[1591]: time="2025-09-13T01:38:50.002975713Z" level=info msg="StartContainer for \"1b9221f18976c5d32ff9e1fc85761546b9f7e7b392017f61b24d096f97d33a73\""
Sep 13 01:38:50.062960 env[1591]: time="2025-09-13T01:38:50.062906078Z" level=info msg="StartContainer for \"1b9221f18976c5d32ff9e1fc85761546b9f7e7b392017f61b24d096f97d33a73\" returns successfully"
Sep 13 01:38:50.089405 env[1591]: time="2025-09-13T01:38:50.089341925Z" level=info msg="shim disconnected" id=1b9221f18976c5d32ff9e1fc85761546b9f7e7b392017f61b24d096f97d33a73
Sep 13 01:38:50.089601 env[1591]: time="2025-09-13T01:38:50.089455685Z" level=warning msg="cleaning up after shim disconnected" id=1b9221f18976c5d32ff9e1fc85761546b9f7e7b392017f61b24d096f97d33a73 namespace=k8s.io
Sep 13 01:38:50.089601 env[1591]: time="2025-09-13T01:38:50.089468405Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:50.096893 env[1591]: time="2025-09-13T01:38:50.096846916Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4797 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:50.968261 env[1591]: time="2025-09-13T01:38:50.968222510Z" level=info msg="CreateContainer within sandbox \"14c2097637919a63680603273fbb58b0a43468022716fc39bad82ab2adbcd984\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 01:38:51.245565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount722504009.mount: Deactivated successfully.
Sep 13 01:38:51.260959 env[1591]: time="2025-09-13T01:38:51.260905961Z" level=info msg="CreateContainer within sandbox \"14c2097637919a63680603273fbb58b0a43468022716fc39bad82ab2adbcd984\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"36e28b76ceaa5a230ec9e601dcd64831d6e72e863060acb046c15c5e6e24a298\""
Sep 13 01:38:51.261488 env[1591]: time="2025-09-13T01:38:51.261456960Z" level=info msg="StartContainer for \"36e28b76ceaa5a230ec9e601dcd64831d6e72e863060acb046c15c5e6e24a298\""
Sep 13 01:38:51.309863 env[1591]: time="2025-09-13T01:38:51.309818903Z" level=info msg="StartContainer for \"36e28b76ceaa5a230ec9e601dcd64831d6e72e863060acb046c15c5e6e24a298\" returns successfully"
Sep 13 01:38:51.341846 env[1591]: time="2025-09-13T01:38:51.341803625Z" level=info msg="shim disconnected" id=36e28b76ceaa5a230ec9e601dcd64831d6e72e863060acb046c15c5e6e24a298
Sep 13 01:38:51.342147 env[1591]: time="2025-09-13T01:38:51.342118465Z" level=warning msg="cleaning up after shim disconnected" id=36e28b76ceaa5a230ec9e601dcd64831d6e72e863060acb046c15c5e6e24a298 namespace=k8s.io
Sep 13 01:38:51.342222 env[1591]: time="2025-09-13T01:38:51.342208505Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:51.344244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36e28b76ceaa5a230ec9e601dcd64831d6e72e863060acb046c15c5e6e24a298-rootfs.mount: Deactivated successfully.
Sep 13 01:38:51.353895 env[1591]: time="2025-09-13T01:38:51.353848291Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4852 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:51.973669 env[1591]: time="2025-09-13T01:38:51.973627357Z" level=info msg="CreateContainer within sandbox \"14c2097637919a63680603273fbb58b0a43468022716fc39bad82ab2adbcd984\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 01:38:52.000577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount628321614.mount: Deactivated successfully.
Sep 13 01:38:52.020156 env[1591]: time="2025-09-13T01:38:52.020100543Z" level=info msg="CreateContainer within sandbox \"14c2097637919a63680603273fbb58b0a43468022716fc39bad82ab2adbcd984\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3e0ce608db15e77d7ef58ab96d977761083b6d854ed79b08485dff5f3a9efb82\""
Sep 13 01:38:52.020759 env[1591]: time="2025-09-13T01:38:52.020733662Z" level=info msg="StartContainer for \"3e0ce608db15e77d7ef58ab96d977761083b6d854ed79b08485dff5f3a9efb82\""
Sep 13 01:38:52.075533 env[1591]: time="2025-09-13T01:38:52.075474041Z" level=info msg="StartContainer for \"3e0ce608db15e77d7ef58ab96d977761083b6d854ed79b08485dff5f3a9efb82\" returns successfully"
Sep 13 01:38:52.592407 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Sep 13 01:38:55.250742 systemd-networkd[1769]: lxc_health: Link UP
Sep 13 01:38:55.281979 systemd-networkd[1769]: lxc_health: Gained carrier
Sep 13 01:38:55.282410 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 01:38:56.339148 kubelet[2605]: I0913 01:38:56.339084 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h6xb5" podStartSLOduration=9.339067288 podStartE2EDuration="9.339067288s" podCreationTimestamp="2025-09-13 01:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:38:52.995249128 +0000 UTC m=+228.744068052" watchObservedRunningTime="2025-09-13 01:38:56.339067288 +0000 UTC m=+232.087886212"
Sep 13 01:38:56.931531 systemd-networkd[1769]: lxc_health: Gained IPv6LL
Sep 13 01:39:01.484738 systemd[1]: run-containerd-runc-k8s.io-3e0ce608db15e77d7ef58ab96d977761083b6d854ed79b08485dff5f3a9efb82-runc.uXLFVF.mount: Deactivated successfully.
Sep 13 01:39:01.617742 sshd[4537]: pam_unix(sshd:session): session closed for user core
Sep 13 01:39:01.620416 systemd[1]: sshd@24-10.200.20.20:22-10.200.16.10:34668.service: Deactivated successfully.
Sep 13 01:39:01.621175 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 01:39:01.622248 systemd-logind[1563]: Session 27 logged out. Waiting for processes to exit.
Sep 13 01:39:01.623142 systemd-logind[1563]: Removed session 27.
Sep 13 01:39:04.472512 env[1591]: time="2025-09-13T01:39:04.472462474Z" level=info msg="StopPodSandbox for \"cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70\""
Sep 13 01:39:04.472858 env[1591]: time="2025-09-13T01:39:04.472580354Z" level=info msg="TearDown network for sandbox \"cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70\" successfully"
Sep 13 01:39:04.472858 env[1591]: time="2025-09-13T01:39:04.472623154Z" level=info msg="StopPodSandbox for \"cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70\" returns successfully"
Sep 13 01:39:04.473102 env[1591]: time="2025-09-13T01:39:04.473063874Z" level=info msg="RemovePodSandbox for \"cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70\""
Sep 13 01:39:04.473135 env[1591]: time="2025-09-13T01:39:04.473103514Z" level=info msg="Forcibly stopping sandbox \"cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70\""
Sep 13 01:39:04.473197 env[1591]: time="2025-09-13T01:39:04.473175714Z" level=info msg="TearDown network for sandbox \"cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70\" successfully"
Sep 13 01:39:04.487577 env[1591]: time="2025-09-13T01:39:04.487516428Z" level=info msg="RemovePodSandbox \"cf30bb1a48dd0d34c98fee99779c5c958b4809e5fada03362d311f82350c2c70\" returns successfully"
Sep 13 01:39:04.488282 env[1591]: time="2025-09-13T01:39:04.488124787Z" level=info msg="StopPodSandbox for \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\""
Sep 13 01:39:04.488282 env[1591]: time="2025-09-13T01:39:04.488201827Z" level=info msg="TearDown network for sandbox \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" successfully"
Sep 13 01:39:04.488282 env[1591]: time="2025-09-13T01:39:04.488233507Z" level=info msg="StopPodSandbox for \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" returns successfully"
Sep 13 01:39:04.490107 env[1591]: time="2025-09-13T01:39:04.488697067Z" level=info msg="RemovePodSandbox for \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\""
Sep 13 01:39:04.490107 env[1591]: time="2025-09-13T01:39:04.488719587Z" level=info msg="Forcibly stopping sandbox \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\""
Sep 13 01:39:04.490107 env[1591]: time="2025-09-13T01:39:04.488778907Z" level=info msg="TearDown network for sandbox \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" successfully"
Sep 13 01:39:04.497061 env[1591]: time="2025-09-13T01:39:04.497016263Z" level=info msg="RemovePodSandbox \"d30e41e60b963f23395fa043972e671e666fcb76c4e8e00e0191054980be229d\" returns successfully"
Sep 13 01:39:04.497709 env[1591]: time="2025-09-13T01:39:04.497676383Z" level=info msg="StopPodSandbox for \"b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02\""
Sep 13 01:39:04.497812 env[1591]: time="2025-09-13T01:39:04.497769223Z" level=info msg="TearDown network for sandbox \"b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02\" successfully"
Sep 13 01:39:04.497858 env[1591]: time="2025-09-13T01:39:04.497809543Z" level=info msg="StopPodSandbox for \"b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02\" returns successfully"
Sep 13 01:39:04.498164 env[1591]: time="2025-09-13T01:39:04.498144183Z" level=info msg="RemovePodSandbox for \"b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02\""
Sep 13 01:39:04.498283 env[1591]: time="2025-09-13T01:39:04.498251183Z" level=info msg="Forcibly stopping sandbox \"b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02\""
Sep 13 01:39:04.498405 env[1591]: time="2025-09-13T01:39:04.498371063Z" level=info msg="TearDown network for sandbox \"b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02\" successfully"
Sep 13 01:39:04.506767 env[1591]: time="2025-09-13T01:39:04.506727779Z" level=info msg="RemovePodSandbox \"b2a602b843cc466eae42ab30700119eebdef808b6f608c320a603e4d2d68ef02\" returns successfully"